aspecd.model module

Numerical models

Models are defined by (constant) parameters and variables the model is evaluated for. The variables can be thought of as the axes values of the resulting (calculated) dataset.

As a simple example, consider a polynomial defined by its (constant) coefficients. The model will evaluate the polynomial for the given variable values, and the result will be an aspecd.dataset.CalculatedDataset object containing the values of the evaluated model in its data and the variables as its axes values.

Models can be seen, in some regard, as an abstraction of simulations. In this respect, they will play a central role in conjunction with fitting models to data by adjusting their respective parameters, a quite general approach in science and particularly in spectroscopy.

A bit of terminology

parameters :

constant parameters (sometimes termed coefficients) characterising the model

Example: In case of a polynomial, the coefficients would be the parameters of the model.

variables :

values to evaluate the model for

Example: In case of a polynomial, the x values the model is evaluated for would be the variables, with the y values being the corresponding dependent values dictated by the model and its parameters.

Models provided within this module

Besides providing the basis for models for the ASpecD framework, this module comes with a (growing) number of general-purpose models useful for basically all kinds of spectroscopic data.

Here is a list as a first overview. For details, see the detailed documentation of each of the classes, readily accessible via the respective links.

Primitive models

Primitive models are mainly used to create test datasets that can be operated on afterwards. The particular strength and beauty of wrapping essential one-liners of code with a full-fledged model class is twofold: These classes return ASpecD datasets, and you can work entirely in the context of recipe-driven data analysis, requiring no actual programming skills.

If nothing else, these primitive models can serve as a way to create datasets with fixed data dimensions. Those datasets may be used as templates for more advanced models, by using the aspecd.model.Model.from_dataset() method.
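
For those preferring plain Python over recipes, a minimal sketch of this template workflow could look as follows (parameter values are chosen purely for illustration; the attribute names are those documented for the respective classes below):

import aspecd.model

# Create a dummy dataset defining shape and axes range
zeros = aspecd.model.Zeros()
zeros.parameters["shape"] = 1001
zeros.parameters["range"] = [-5, 5]
template = zeros.create()

# Use the dummy dataset as template: variables (and axes) are taken from it
gaussian = aspecd.model.Gaussian()
gaussian.from_dataset(template)
dataset = gaussian.create()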

Here is a list of the primitive models:

  • aspecd.model.Zeros

    Dataset consisting of zeros of given shape (in N dimensions), optionally with the axes ranges set.

  • aspecd.model.Ones

    Dataset consisting of ones of given shape (in N dimensions), optionally with the axes ranges set.

Mathematical models

Besides the primitive models listed above, there is a growing number of mathematical models implementing comparably simple mathematical equations that are often used. Packages derived from the ASpecD framework may well define more specific models as well.

  • aspecd.model.Polynomial

    Polynomial (of arbitrary degree/order, depending on the number of coefficients)

  • aspecd.model.Gaussian

    Generalised Gaussian where amplitude, position, and width can be set explicitly. Hence, this is usually not identical to the probability density function (PDF) of a normally distributed random variable.

  • aspecd.model.NormalisedGaussian

    Normalised Gaussian with an integral of one, identical to the probability density function (PDF) of a normally distributed random variable.

  • aspecd.model.Lorentzian

    Generalised Lorentzian where amplitude, position, and width can be set explicitly. Hence, this is usually not identical to the probability density function (PDF) of the Cauchy distribution.

  • aspecd.model.NormalisedLorentzian

    Normalised Lorentzian with an integral of one, identical to the probability density function (PDF) of the Cauchy distribution.

  • aspecd.model.Voigtian

    Voigt profile: convolution of (normalised) Lorentzian and (normalised) Gaussian, often used to describe spectroscopic data.

  • aspecd.model.Sine

    Sine wave with adjustable amplitude, frequency, and phase.

  • aspecd.model.Exponential

    Exponential function with adjustable prefactor and rate.

Composite models consisting of a sum of individual models

Often you encounter situations where a model consists of a (weighted) sum of individual models. A simple example would be a damped oscillation. Or think of a spectral line consisting of several overlapping individual lines (Lorentzian or Gaussian).

All this can be easily set up using the aspecd.model.CompositeModel class that lets you conveniently specify a list of models, their individual parameters, and optional weights.

Family of curves

Systematically varying one parameter at a time for a given model is key to understanding the impact this parameter has. Therefore, automatically creating a family of curves with one parameter varied is quite convenient.

To achieve this, use the class aspecd.model.FamilyOfCurves that will take the name of a model (needs to be the name of an existing model class) and create a family of curves for this model, adding the name of the parameter as quantity to the additional axis.

Writing your own models

All models should inherit from the aspecd.model.Model class. Furthermore, they should conform to a series of requirements:

  • Parameters are stored in the aspecd.model.Model.parameters dict.

    Note that this is a dict. In the simplest case, you may name the corresponding key “coefficients”, as in case of a polynomial. In other cases, there are common names for parameters, such as “mu” and “sigma” for a Gaussian. Whether the keys should be named this way or describe the actual meaning of the parameter is partly a matter of personal taste. Use whatever is more common in the given context, but tend to be descriptive. Usually, implementing mathematical equations by simply naming every variable according to the mathematical notation is a bad idea, as the programmer will not know what these variables represent.

  • Models create calculated datasets of class aspecd.dataset.CalculatedDataset.

    The data of these datasets need to have dimensions corresponding to the variables set for the model. Think of the variables as being the axes values of the resulting dataset.

The _origdata property of the dataset is automatically set accordingly (see below for details). This is crucially important to have the resulting dataset work as expected, including undo and redo functionality within the ASpecD framework. Remember: A calculated dataset is a regular dataset, and you can perform all the tasks on it that you would perform on any other dataset, including processing, analysis, and the like.

  • Model creation takes place entirely in the non-public _perform_task method of the model.

    This method gets called from aspecd.model.Model.create(), but not before some background checks have been performed, including preparing the metadata of the aspecd.dataset.CalculatedDataset object returned by aspecd.model.Model.create().

After calling out to _perform_task, the axes of the aspecd.dataset.CalculatedDataset object returned by aspecd.model.Model.create() are set accordingly, i.e., fitted to the shape of the data.

On the other hand, a series of things will be automatically taken care of for you:

  • Metadata of the resulting aspecd.dataset.CalculatedDataset object are automatically set, including type (set to the full class name of the model) and parameters (copied over from the parameters attribute of the model).

  • Axes of the resulting aspecd.dataset.CalculatedDataset object are automatically adjusted according to the size and content of the aspecd.model.Model.variables attribute.

    In case you used aspecd.model.Model.from_dataset(), the axes from the dataset will be copied over from there.

  • The _origdata property of the dataset is automatically set accordingly. This is crucially important to have the resulting dataset work as expected, including undo and redo functionality within the ASpecD framework.

Make sure your models do not raise errors such as ZeroDivisionError depending on the parameters set. Use the aspecd.utils.not_zero() function where appropriate. This is particularly important in light of using models in the context of automated fitting.
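
To make these requirements more tangible, here is a minimal sketch of a hypothetical model class. The class name, parameter names, and equation are made up for illustration only, and the internal _dataset attribute filled in _perform_task is assumed from the models shipped with this module:

import numpy as np

import aspecd.model
from aspecd.utils import not_zero


class BellCurve(aspecd.model.Model):
    """Simple bell-shaped curve -- hypothetical example, for illustration only."""

    def __init__(self):
        super().__init__()
        self.description = "Simple bell-shaped curve"
        # Parameters are stored in the "parameters" dict with descriptive names
        self.parameters["position"] = 0
        self.parameters["width"] = 1

    def _perform_task(self):
        # Model creation takes place entirely here; metadata, axes, and
        # _origdata are handled by the framework before/after this call.
        x = self.variables[0]
        width = not_zero(self.parameters["width"])  # guard against division by zero
        self._dataset.data.data = 1 / (
            1 + ((x - self.parameters["position"]) / width) ** 2
        )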

Module documentation

class aspecd.model.Model

Bases: ToDictMixin

Base class for numerical models.

Models are defined by (constant) parameters and variables the model is evaluated for. The variables can be thought of as the axes values of the resulting (calculated) dataset.

As a simple example, consider a polynomial defined by its (constant) coefficients. The model will evaluate the polynomial for the given variable values, and the result will be an aspecd.dataset.CalculatedDataset object containing the values of the evaluated model in its data and the variables as its axes values.

Models can be seen, in some regard, as an abstraction of simulations. In this respect, they will play a central role in conjunction with fitting models to data by adjusting their respective parameters, a quite general approach in science and particularly in spectroscopy.

name

Name of the model.

Defaults to the lower-case class name, don’t change!

Type:

str

parameters

constant parameters characterising the model

Type:

dict

variables

values to evaluate the model for

Usually numpy.ndarray arrays, one for each variable

The variables will become the values of the respective axes.

Type:

list

description

Short description, to be set in class definition

Type:

str

references

List of references with relevance for the implementation of the model.

Use appropriate record types from the bibrecord package.

Type:

list

label

Label that will be applied to the calculated dataset

Usually, labels provide a short and concise description of a dataset, at least in a given context.

Type:

str

axes

List of dicts containing quantity and unit for each axis.

Needs to have the same length as the axes of the created dataset.

If you would like to skip one axis, set it to an empty dict, None, or False (i.e., anything that evaluates to False in Python). This is particularly helpful with models such as FamilyOfCurves that auto-generate one axis.

Type:

list

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Defining a model in a recipe generally follows the strategy shown below, although creating an instance of the base class Model will usually not help, as you need to instantiate a concrete model:

kind: model
type: Model
properties:
  parameters:
    foo: 42
    bar: 21
from_dataset: dataset_label
result: foo

Note that you can refer to datasets and results created during cooking of a recipe using their respective labels. Those labels will automatically be replaced by the actual dataset/result prior to performing the task.

Here, we have used this for the from_dataset parameter in the above recipe excerpt. For a Model object, you can set the variables explicitly. In the context of a recipe, however, this is rarely useful. Therefore, the from_dataset parameter lets you refer to a dataset (by the label used within the recipe) that the Model.from_dataset() method is called with, in order to obtain the variables from this dataset.
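
For comparison, here is a minimal Python sketch where the variables are set explicitly rather than obtained from a dataset (using the Gaussian model as a stand-in for any concrete model):

import numpy as np

import aspecd.model

model = aspecd.model.Gaussian()
model.variables = [np.linspace(-5, 5, 1001)]   # one ndarray per variable
model.parameters["amplitude"] = 5
dataset = model.create()                        # aspecd.dataset.CalculatedDataset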

Changed in version 0.3: New attribute description

Changed in version 0.3: New non-public method _sanitise_parameters()

Changed in version 0.4: New attribute references

Changed in version 0.6: New attributes label and axes

create()

Create dataset containing the evaluated model as data

The actual model creation should be implemented within the non-public method _perform_task(). Furthermore, you should make sure your model will be evaluated for the values given in aspecd.model.Model.variables and that the resulting dataset has its axes set appropriately.

Furthermore, don’t forget to set the _origdata property of the dataset, usually simply by copying the data property over there after it has been filled with content. This is crucially important to have the resulting dataset work as expected, including undo and redo functionality within the ASpecD framework. Remember: A calculated dataset is a regular dataset, and you can perform all the tasks on it that you would perform on any other dataset, including processing, analysis, and the like.

Returns:

dataset – Calculated dataset containing the evaluated model as data

Return type:

aspecd.dataset.CalculatedDataset

Raises:

aspecd.exceptions.MissingParameterError – Raised if either parameters or values are not set

evaluate()

Evaluate model and return numerical data without any checks.

Important

Usually, you should always use create() and obtain a dataset based on the model. However, create() performs a lot of additional checks. Therefore, if you are sure to have set all properties as necessary and are interested in a probably much faster evaluation of the model for a given set of parameters, e.g. in context of fitting, this is the method of choice.

Returns:

data – Numerical data of the model

Return type:

numpy.ndarray

Added in version 0.7.
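
As a minimal sketch of the difference between evaluate() and create() (using the Gaussian model and its documented parameters for illustration):

import numpy as np

import aspecd.model

model = aspecd.model.Gaussian()
model.variables = [np.linspace(-5, 5, 1001)]

for width in (0.5, 1.0, 2.0):          # e.g., trial values during fitting
    model.parameters["width"] = width
    data = model.evaluate()             # plain numerical data, no checks

dataset = model.create()                # full CalculatedDataset, with all checks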

from_dataset(dataset=None)

Obtain crucial information from an existing dataset.

Often, models should be calculated for the same values as an existing dataset. Therefore, you can set the aspecd.model.Model.variables property from a given dataset.

If you get the variables from an existing dataset, the calculated dataset containing the evaluated model will have the same axes settings. Thus, it is pretty convenient to get a model with identical axes, including quantity and the like. This helps a lot with plotting both an (experimental) dataset and the model in one plot.

Parameters:

dataset (aspecd.dataset.Dataset) – Dataset to obtain crucial information for building the model from

Raises:

aspecd.exceptions.MissingDatasetError – Raised if no dataset is provided

from_dict(dict_=None)

Set attributes from dictionary.

Parameters:

dict_ (dict) – Dictionary containing information of a task.

Raises:

aspecd.exceptions.MissingDictError – Raised if no dict is provided.

class aspecd.model.CompositeModel

Bases: Model

Composite model consisting of weighted contributions of individual models.

Individual models can either be added up (default) or multiplied, depending on which operators are provided. Both situations occur frequently. If you would like to describe a spectrum as a sum of Gaussian or Lorentzian lines, you need to add the individual contributions. If you would like to model a damped oscillation, you would need to multiply the exponential decay onto the oscillation.

models

Names of the models the composite model consists of

Each name needs to be the name of an existing model class.

Type:

list

parameters

Constant parameters characterising each individual model

For the parameters that can (and need to) be set, consult the documentation of each of the respective model classes specified in the models attribute.

Type:

list

weights

Factors used to weight the individual models.

Default: no weighting

Type:

list

operators

Operators to be used for the individual models.

Addition (“+”, “add”, “plus”) and multiplication (“*”, “multiply”, “times”) are supported.

Note that you need to provide one operator fewer than the number of models.

Default: add

Type:

list

Raises:

IndexError – Raised if number of models, parameter sets, operators, and weights are incompatible

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to describe your data with a model consisting of two Lorentzian line shapes. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your model:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [0, 20]
  result: dummy

- kind: model
  type: CompositeModel
  from_dataset: dummy
  properties:
    models:
      - Lorentzian
      - Lorentzian
    parameters:
      - position: 5
      - position: 8
  result: multiple_lorentzians

Note that you need to provide parameters for each of the individual models, even if the class for a model would work without explicitly providing parameters.

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

While adding up the contributions of the individual components works well for describing spectra, sometimes you need to multiply contributions. Suppose you would want to create a damped oscillation consisting of a sine and an exponential. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your model:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [0, 20]
  result: dummy

- kind: model
  type: CompositeModel
  from_dataset: dummy
  properties:
    models:
      - Sine
      - Exponential
    parameters:
      - frequency: 1
        phase: 1.57
      - rate: -0.2
    operators:
      - multiply
  result: damped_oscillation

Again, you need to provide parameters for each of the individual models, even if the class for a model would work without explicitly providing parameters.
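
The weights attribute is not used in the recipe examples above. A minimal Python sketch of a weighted sum of two Lorentzians might look like this (weights and parameter values chosen purely for illustration):

import numpy as np

import aspecd.model

composite = aspecd.model.CompositeModel()
composite.variables = [np.linspace(0, 20, 1001)]
composite.models = ["Lorentzian", "Lorentzian"]
composite.parameters = [{"position": 5}, {"position": 8}]
composite.weights = [2, 1]   # first line contributes twice as strongly
dataset = composite.create()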

Added in version 0.3.

class aspecd.model.FamilyOfCurves

Bases: Model

Create a family of curves for a model, varying a single parameter.

Systematically varying one parameter at a time for a given model is key to understanding the impact this parameter has. Therefore, automatically creating a family of curves with one parameter varied is quite convenient.

This class will take the name of a model (needs to be the name of an existing model class) and create a family of curves for this model, adding the name of the parameter as quantity to the additional axis.

model

Name of the model the family of curves should be calculated for

Needs to be the name of an existing model class.

Type:

str

vary

Name and values of the parameter to be varied

parameter : str

Name of the parameter that should be varied

values : list

Values of the parameter to be varied

Type:

dict

Raises:

ValueError – Raised if no model is provided

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a family of curves of a Gaussian with varying the width. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your family of curves:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: FamilyOfCurves
  from_dataset: dummy
  properties:
    model: Gaussian
    vary:
      parameter: width
      values: [1., 1.5, 2., 2.5, 3]
  result: gaussian_with_varied_width

This would create a 2D dataset with a Gaussian with standard values for amplitude and position and the value for the width varied as given.

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

If you would like to control additional parameters of the Gaussian, you can do that as well:

- kind: model
  type: FamilyOfCurves
  from_dataset: dummy
  properties:
    model: Gaussian
    parameters:
      amplitude: 3.
      position: -1
    vary:
      parameter: width
      values: [1., 1.5, 2., 2.5, 3]
  result: gaussian_with_varied_width

Note that if you provide a value for the parameter to be varied in the list of parameters, it will be silently overwritten by the values provided with vary.
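
The same family of curves, set up directly in Python, might look like this minimal sketch (using the vary keys documented above):

import numpy as np

import aspecd.model

family = aspecd.model.FamilyOfCurves()
family.variables = [np.linspace(-5, 5, 1001)]
family.model = "Gaussian"
family.vary = {"parameter": "width", "values": [1.0, 1.5, 2.0, 2.5, 3.0]}
dataset = family.create()   # 2D dataset, one curve per value of the width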

Added in version 0.3.

class aspecd.model.Zeros

Bases: Model

Zeros of given shape.

One of the most primitive models: zeros in N dimensions.

This model is quite helpful for creating test datasets, e.g. with added noise (of different colour). Basically, it can be thought of as a wrapper for numpy.zeros(). Its particular strength is that, using this model, creating test datasets becomes straightforward in the context of recipe-driven data analysis.

parameters

All parameters necessary for this step.

shape : list

shape of the data

Keep in mind that ND datasets get huge very fast. Therefore, it is not the best idea to create a 3D dataset of zeros with 2**12 elements along each dimension.

range : list

range of each of the axes

Useful if you want to specify the axes values as well.

If the data are multidimensional, one range for each axis needs to be provided.

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Creating a dataset consisting of 2**10 zeros is quite simple:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1024
  result: 1d_zeros

Of course, you are not limited to 1D datasets, and you can easily create ND datasets as well:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: [1024, 256, 256]
  result: 3d_zeros

Please keep in mind that the memory of your computer is usually limited and that ND datasets become huge very fast. Hence, creating a 3D array with 2**10 elements along each dimension is most probably not the best idea.

Suppose you not only want to create a dataset with a given shape, but set the axes values (i.e., their range) as well:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1024
      range: [35, 42]
  result: 1d_zeros

This would create a 1D dataset with 1024 values, with the axes values spanning a range from 35 to 42. Of course, the same can be done with ND datasets.

Now, let’s assume that you would want to play around with the different types of (coloured) noise. Therefore, you would want to first create a dataset and afterwards add noise to it:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 8192
  result: 1d_zeros

- kind: processing
  type: Noise
  properties:
    parameters:
      normalise: True

This would create a dataset consisting of 2**13 zeros and add pink (1/f) noise to it that is normalised (has an amplitude of 1). To check that the noise is really 1/f noise, you may look at its power density. See aspecd.analysis.PowerDensitySpectrum for details, including how to plot both the power density spectrum and a linear fit together in one figure.

Added in version 0.3.

class aspecd.model.Ones

Bases: Model

Ones of given shape.

One of the most primitive models: ones in N dimensions.

This model is quite helpful for creating test datasets, e.g. with added noise (of different colour). Basically, it can be thought of as a wrapper for numpy.ones(). Its particular strength is that, using this model, creating test datasets becomes straightforward in the context of recipe-driven data analysis.

parameters

All parameters necessary for this step.

shape : list

shape of the data

Keep in mind that ND datasets get huge very fast. Therefore, it is not the best idea to create a 3D dataset of ones with 2**12 elements along each dimension.

range : list

range of each of the axes

Useful if you want to specify the axes values as well.

If the data are multidimensional, one range for each axis needs to be provided.

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Creating a dataset consisting of 2**10 ones is quite simple:

- kind: model
  type: Ones
  properties:
    parameters:
      shape: 1024
  result: 1d_ones

Of course, you are not limited to 1D datasets, and you can easily create ND datasets as well:

- kind: model
  type: Ones
  properties:
    parameters:
      shape: [1024, 256, 256]
  result: 3d_ones

Please keep in mind that the memory of your computer is usually limited and that ND datasets become huge very fast. Hence, creating a 3D array with 2**10 elements along each dimension is most probably not the best idea.

Suppose you not only want to create a dataset with a given shape, but set the axes values (i.e., their range) as well:

- kind: model
  type: Ones
  properties:
    parameters:
      shape: 1024
      range: [35, 42]
  result: 1d_ones

This would create a 1D dataset with 1024 values, with the axes values spanning a range from 35 to 42. Of course, the same can be done with ND datasets.

Now, let’s assume that you would want to play around with the different types of (coloured) noise. Therefore, you would want to first create a dataset and afterwards add noise to it:

- kind: model
  type: Ones
  properties:
    parameters:
      shape: 8192
  result: 1d_ones

- kind: processing
  type: Noise
  properties:
    parameters:
      normalise: True

This would create a dataset consisting of 2**13 ones and add pink (1/f) noise to it that is normalised (has an amplitude of 1). To check that the noise is really 1/f noise, you may look at its power density. See aspecd.analysis.PowerDensitySpectrum for details, including how to plot both the power density spectrum and a linear fit together in one figure.

Added in version 0.3.

class aspecd.model.Polynomial

Bases: Model

Polynomial.

Evaluate a polynomial with given coefficients for the data provided in aspecd.model.Model.variables.

Note

As the new numpy.polynomial package is used, particularly the numpy.polynomial.polynomial.Polynomial class, the coefficients are given in increasing order, with the first element corresponding to x**0.

Furthermore, the coefficients are assumed to be provided in the unscaled data domain (by using the numpy.polynomial.polynomial.Polynomial.convert() method).
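
As a short sketch of what this ordering means, using numpy directly (coefficient values chosen for illustration only):

import numpy as np
from numpy.polynomial import Polynomial

x = np.linspace(-5, 5, 1001)
# Coefficients in increasing order: the first element corresponds to x**0
y = Polynomial([-3, 42])(x)   # -3 + 42 * x, i.e. intercept first, then slope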

parameters

All parameters necessary for this step.

coefficients : list

coefficients of the polynomial to be evaluated

The number of coefficients determines the order (degree) of the polynomial. The coefficients have to be given in increasing order (see note above). Furthermore, you need to provide the coefficients in the unscaled data domain (using the numpy.polynomial.polynomial.Polynomial.convert() method).

Type:

dict

Raises:

aspecd.exceptions.MissingParameterError – Raised if no coefficients are given

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a Polynomial of first order with a slope of 42 and an intercept of -3. Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your Polynomial:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Polynomial
  from_dataset: dummy
  properties:
    parameters:
      coefficients: [-3, 42]
  result: polynomial

Note that the coefficients are given in increasing order of the exponent, here intercept first, followed by the slope.

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Added in version 0.3.

class aspecd.model.Gaussian

Bases: Model

Generalised Gaussian.

Creates a Gaussian function or Gaussian, with its characteristic symmetric “bell curve” shape.

The underlying mathematical equation may be written as follows:

\[f(x) = a \exp\left(-\frac{(x-b)^2}{2c^2}\right)\]

with \(a\) being the amplitude, \(b\) the position, and \(c\) the width of the Gaussian.

Important

Note that this is a generalised Gaussian where you can set amplitude, position, and width independently. Hence, it is not normalised to an integral of one, and therefore not to be confused with the probability density function (PDF) of a normally distributed random variable. If you are interested in this, see the aspecd.model.NormalisedGaussian class.

parameters

All parameters necessary for this step.

amplitude : float

Amplitude or height of the Gaussian

Default: 1

position : float

Position (of the maximum) of the Gaussian

Default: 0

width : float

Width of the Gaussian

The full width at half maximum (FWHM) is related to the width \(c\) by: \(2 \sqrt{2 \log(2)} c\).

Default: 1

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a Gaussian with standard values (amplitude=1, position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your Gaussian:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Gaussian
  from_dataset: dummy
  result: gaussian

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control all three parameters (amplitude, position, width) explicitly:

- kind: model
  type: Gaussian
  properties:
    parameters:
      amplitude: 5
      position: 1.5
      width: 0.5
  from_dataset: dummy
  result: gaussian

This would create a Gaussian with an amplitude (height) of 5, situated at a value of 1.5 at the x axis, and with a width of 0.5.
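
With the FWHM relation given above (with \(\log\) denoting the natural logarithm), the width of 0.5 used in this example corresponds to a full width at half maximum of

\[\mathrm{FWHM} = 2 \sqrt{2 \log(2)} \cdot 0.5 \approx 1.18.\]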

Added in version 0.3.

class aspecd.model.NormalisedGaussian

Bases: Model

Normalised Gaussian.

Creates a Gaussian function or Gaussian, with its characteristic symmetric “bell curve” shape, normalised to an integral of one. Thus, it is the probability density function (PDF) of a normally distributed random variable.

The underlying mathematical equation may be written as follows:

\[f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left(-\frac{(x-\mu)^2}{ 2\sigma^2}\right)\]

with \(\mu\) being the position and \(\sigma\) the width of the Gaussian, and \(\sigma^2\) the variance.

Note

This class creates a normalised Gaussian, equivalent to the PDF of a normally distributed random variable. If you are interested in a Gaussian where you can set all three parameters (amplitude, position, width) independently, see the aspecd.model.Gaussian class.

parameters

All parameters necessary for this step.

position : float

Position (of the maximum) of the Gaussian

For a normally distributed random variable \(x\), the position is identical to its expected value \(E(x)\) or mean \(\mu\). Other names include first moment and average.

Default: 0

width : float

Width of the Gaussian

The full width at half maximum (FWHM) is related to the width \(\sigma\) by: \(2 \sqrt{2 \log(2)} \sigma\). The squared value of the width is better known as the variance \(\sigma^2\).

Default: 1

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a normalised Gaussian with standard values (position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your normalised Gaussian:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: NormalisedGaussian
  from_dataset: dummy
  result: normalised_gaussian

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control position and width explicitly:

- kind: model
  type: NormalisedGaussian
  properties:
    parameters:
      position: 1.5
      width: 0.5
  from_dataset: dummy
  result: normalised_gaussian

This would create a normalised Gaussian with its maximum situated at a value of 1.5 at the x axis, and with a width of 0.5.

Added in version 0.3.

class aspecd.model.Lorentzian

Bases: Model

Generalised Lorentzian.

Creates a Lorentzian function or Lorentzian often used in spectroscopy, as the line shape of a purely lifetime-broadened spectral line is identical to such a Lorentzian.

The underlying mathematical equation may be written as follows:

\[f(x) = a \left[\frac{c^2}{(x-b)^2 + c^2}\right]\]

with \(a\) being the amplitude, \(b\) the position, and \(c\) the width of the Lorentzian.

Important

Note that this is a generalised Lorentzian where you can set amplitude, position, and width independently. Hence, it is not normalised to an integral of one, and therefore not to be confused with the probability density function (PDF) of the Cauchy distribution. If you are interested in this, see the aspecd.model.NormalisedLorentzian class.

parameters

All parameters necessary for this step.

amplitude : float

Amplitude or height of the Lorentzian

Default: 1

position : float

Position (of the maximum) of the Lorentzian

Default: 0

width : float

Width of the Lorentzian

The full width at half maximum (FWHM) is related to the width \(c\) by: \(2c\).

Default: 1

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a Lorentzian with standard values (amplitude=1, position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your Lorentzian:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Lorentzian
  from_dataset: dummy
  result: lorentzian

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control all three parameters (amplitude, position, width) explicitly:

- kind: model
  type: Lorentzian
  properties:
    parameters:
      amplitude: 5
      position: 1.5
      width: 0.5
  from_dataset: dummy
  result: lorentzian

This would create a Lorentzian with an amplitude (height) of 5, situated at a value of 1.5 at the x axis, and with a width of 0.5.

Added in version 0.3.

class aspecd.model.NormalisedLorentzian

Bases: Model

Normalised Lorentzian.

Creates a normalised Lorentzian function or Lorentzian with an integral of one, i.e. the probability density function (PDF) of the Cauchy distribution.

The underlying mathematical equation may be written as follows:

\[f(x) = \frac{1}{\pi c} \left[\frac{c^2}{(x-b)^2 + c^2}\right] = \frac{c}{\pi[(x-b)^2 + c^2]}\]

with \(b\) being the position and \(c\) the width of the Lorentzian.

Note

This class creates a normalised Lorentzian, equivalent to the PDF of the Cauchy distribution. If you are interested in a Lorentzian where you can set all three parameters (amplitude, position, width) independently, see the aspecd.model.Lorentzian class.

parameters

All parameters necessary for this step.

position : float

Position (of the maximum) of the Lorentzian

Default: 0

width : float

Width of the Lorentzian

The full width at half maximum (FWHM) is related to the width \(c\) by: \(2c\).

Default: 1

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a normalised Lorentzian with standard values (position=0, width=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your Lorentzian:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: NormalisedLorentzian
  from_dataset: dummy
  result: normalised_lorentzian

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control position and width explicitly:

- kind: model
  type: NormalisedLorentzian
  properties:
    parameters:
      position: 1.5
      width: 0.5
  from_dataset: dummy
  result: normalised_lorentzian

This would create a normalised Lorentzian with its maximum situated at a value of 1.5 at the x axis, and with a width of 0.5.

Added in version 0.3.

class aspecd.model.Voigtian

Bases: Model

Voigt profile.

The Voigt profile (after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution (with half-width at half-maximum gamma) and a Gaussian distribution (with standard deviation sigma). It is often used for analyzing spectroscopic data.

In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms: life-time broadening (Lorentzian part) and inhomogeneous broadening (Gaussian part).

If sigma = 0, the PDF of the Cauchy distribution is returned. Conversely, if gamma = 0, the PDF of the normal distribution is returned. If sigma = gamma = 0, the return value is Inf for x = 0, and 0 for all other x.

Note: Internally, the function scipy.special.voigt_profile() is used to calculate the data.
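
A minimal sketch of what this amounts to when calling scipy directly (using the model's default parameter values):

import numpy as np
from scipy.special import voigt_profile

x = np.linspace(-5, 5, 1001)
position, sigma, gamma = 0, 1, 1                 # default parameters of this model
data = voigt_profile(x - position, sigma, gamma)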

parameters

All parameters necessary for this step.

position : float

Position (of the maximum) of the Voigt profile

Default: 0

sigma : float

Standard deviation of the Gaussian part

Default: 1

gamma : float

Width of the Lorentzian part

The full width at half maximum (FWHM) of the Lorentzian part is related to the width \(\gamma\) by: \(2\gamma\).

Default: 1

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a Voigt profile with standard values (position=0, gamma=1, sigma=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your Voigtian:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Voigtian
  from_dataset: dummy
  result: voigtian

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control position and widths of Gaussian and Lorentzian contributions explicitly:

- kind: model
  type: Voigtian
  properties:
    parameters:
      position: 1.5
      sigma: 0.5
      gamma: 2
  from_dataset: dummy
  result: voigtian

This would create a Voigt profile with its maximum situated at a value of 1.5 at the x axis, and with a standard deviation of the Gaussian component of 0.5 and a line width of the Lorentzian part of 2.

Added in version 0.10.

class aspecd.model.Sine

Bases: Model

Sine wave.

Creates a sine function with given amplitude, frequency, and phase.

The underlying mathematical equation may be written as follows:

\[f(x) = a \sin(fx + \phi)\]

with \(a\) being the amplitude, \(f\) the frequency, and \(\phi\) the phase of the sine.

parameters

All parameters necessary for this step.

amplitude : float

Amplitude of the sine.

Note that the peak-to-peak amplitude (max - min) is twice the value given here. Nevertheless, calling this factor “amplitude” seems to be common.

Default: 1

frequency : float

Frequency of the sine (angular frequency, in radians per unit of the variable).

Default: 1

phase : float

Phase (i.e., shift) of the sine (in radians).

Setting the phase to \(\pi/2\) would result in a cosine.

Default: 0

Type:

dict

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create a sine with standard values (amplitude=1, frequency=1, phase=0). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your sine:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Sine
  from_dataset: dummy
  result: sine

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control all three parameters (amplitude, frequency, and phase) explicitly:

- kind: model
  type: Sine
  properties:
    parameters:
      amplitude: 42
      frequency: 4.2
      phase: 1.57
  from_dataset: dummy
  result: sine

This would create a sine with an amplitude of 42 (the peak-to-peak amplitude, max - min, would be twice this value), a frequency of 4.2, and a phase of about pi/2.

Added in version 0.3.

class aspecd.model.Exponential

Bases: Model

Exponential function.

Creates an exponential with given prefactor and rate.

The underlying mathematical equation may be written as follows:

\[f(x) = a \exp(bx)\]

with \(a\) being the prefactor and \(b\) the rate of the exponential.

parameters

All parameters necessary for this step.

prefactor : float

Intercept of the exponential.

Default: 1

rate : float

Rate of the exponential.

Default: 1

Type:

dict

Note

In case of modelling exponential decays, the rate constant will become negative. The absolute value of this rate constant (decay rate) is the inverse of the lifetime. Lifetime and half-life are related by a factor of ln(2).
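
Spelled out for the equation above with a negative rate \(b\), the lifetime \(\tau\) and the half-life \(t_{1/2}\) read

\[\tau = -\frac{1}{b}, \qquad t_{1/2} = \tau \ln(2) = -\frac{\ln(2)}{b}.\]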

Examples

For convenience, a series of examples in recipe style (for details of the recipe-driven data analysis, see aspecd.tasks) is given below for how to make use of this class. The examples focus each on a single aspect.

Suppose you would want to create an exponential with standard values (prefactor=1, rate=1). Starting from scratch, you need to create a dummy dataset (using, e.g., aspecd.model.Zeros) of given length and axes range. Based on that you can create your exponential:

- kind: model
  type: Zeros
  properties:
    parameters:
      shape: 1001
      range: [-5, 5]
  result: dummy

- kind: model
  type: Exponential
  from_dataset: dummy
  result: exponential

Of course, if you start with an existing dataset (e.g., loaded from some real data), you could use the label of this dataset directly in from_dataset, without needing to create a dummy dataset first.

Of course, you can control all parameters (prefactor, rate) explicitly:

- kind: model
  type: Exponential
  properties:
    parameters:
      prefactor: 42
      rate: 4.2
  from_dataset: dummy
  result: exponential

This would create an exponential with a prefactor of 42 (i.e. the intercept) and a rate of 4.2.

Added in version 0.3.