A few ideas for developing the project further. Currently this is a reminder list for the main developers themselves, in no particular order, though with a tendency to list more important aspects first:

For version 0.1

  • Plotter: stackplot for SinglePlot2D and MultiPlot

  • Recipes: Saving additional information for datasets in history

  • Recipes: Create filenames for saving figures/… from patterns/placeholders available from the respective dataset (e.g., source basename)

  • Recipes: Add optional default output directory

  • Report task: Add figure captions to context if available

  • Metadata mapper via YAML file

  • Documentation: Use cases

  • Documentation: YAML representation of dataset structure(s) (automatically generated together with the documentation and using the literalinclude directive)
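The filename-pattern item above can be sketched as follows. This is purely illustrative: the placeholder syntax, the function name, and the `basename` placeholder are assumptions, not existing ASpecD API.

```python
# Hypothetical sketch: fill placeholders in a filename pattern from
# properties of the respective dataset (here: the source basename).
# Names and pattern syntax are assumptions, not the ASpecD interface.
import os.path


def filename_from_pattern(pattern, dataset_source):
    """Fill placeholders in *pattern* from the dataset source path."""
    basename = os.path.splitext(os.path.basename(dataset_source))[0]
    return pattern.format(basename=basename)


print(filename_from_pattern("{basename}_fig1.pdf", "/data/sample42.dat"))
# -> sample42_fig1.pdf
```

Using `str.format` keeps the pattern language simple and familiar; a real implementation would offer more placeholders (date, task number, and the like).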

For later versions

  • Remaining basic processing and analysis steps, such as baseline correction, algebra with datasets, slice extraction for >2D datasets, averaging (over parts of an axis) for N-D datasets, peak finding, SNR determination, denoising, filtering, noise

  • Reporter: Method for adding dict representations of datasets to context

  • Reporter task: Operating on recipes, i.e., reporting on all tasks in a recipe

  • Reporter task: Adding arbitrary dict representations of properties of datasets/results to context

  • Templates for creating derived packages

  • Default report templates for each type of processing/analysis task

    This includes deciding where to store these templates, whether to store them in different directories for different languages, and the like. Ideally, templates should be copied to a user-accessible directory where users can modify them.

  • Logging

  • Tabular representations of characteristics extracted from datasets


A list of todos, extracted from the code and documentation itself, and only meant as a convenience for the main developers. Ideally, this list will be empty at some point.


There will be a number of models provided here, including, but probably not limited to: polynomials, exponentials, sine, Gaussian, and Lorentzian.
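Two of the line-shape models mentioned above could look roughly like this. Function names, signatures, and parameter conventions are illustrative only, not the eventual ASpecD interface.

```python
# Minimal sketch of Gaussian and Lorentzian model functions; names and
# parametrisations are assumptions, not the eventual ASpecD interface.
import numpy as np


def gaussian(x, amplitude=1.0, centre=0.0, width=1.0):
    """Gaussian line shape; *width* is the standard deviation."""
    return amplitude * np.exp(-((x - centre) ** 2) / (2 * width ** 2))


def lorentzian(x, amplitude=1.0, centre=0.0, width=1.0):
    """Lorentzian line shape; *width* is the half-width at half-maximum."""
    return amplitude * width ** 2 / ((x - centre) ** 2 + width ** 2)


x = np.linspace(-5, 5, 1001)
print(gaussian(x).max())  # -> 1.0, reached at the centre
```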

Handle noisy data, at least for normalising to maximum, minimum, and amplitude.

How should noisy data be handled in the case of area normalisation? Simply taking the absolute value would probably count the noise twice.
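One conceivable way to make normalising to the maximum robust against noise is to use the mean of the n largest values rather than the single largest one. This is purely a sketch of the idea, not the ASpecD implementation; the function name and the default number of points are made up.

```python
# Sketch: estimate the maximum of noisy data as the mean of the
# n_points largest values instead of the single (noise-inflated)
# largest value. Not the ASpecD implementation.
import numpy as np


def noise_robust_maximum(data, n_points=10):
    """Estimate the maximum as the mean of the *n_points* largest values."""
    return np.sort(data, axis=None)[-n_points:].mean()


rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, np.pi, 1000))
noisy = signal + 0.05 * rng.standard_normal(1000)
# The plain maximum overestimates the true value of 1.0; the robust
# estimate is typically closer.
print(noisy.max(), noise_robust_maximum(noisy))
```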

Describe the settings in more detail, thus providing users of this class, and in turn the class itself, with ideas of how to create their templates.

There are a number of things that are not yet implemented, but highly recommended for a working recipe-driven data analysis that follows good practice for reproducible research. This includes (but may not be limited to):

  • Parser for recipes performing a static analysis of their syntax. Particularly useful for larger datasets and/or longer lists of tasks.
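A static check of recipe syntax might start out as simple as the sketch below: verify the top-level structure of a deserialised recipe (in practice obtained via `yaml.safe_load` from the recipe file) before executing anything. The key names (`datasets`, `tasks`, `kind`, `type`) follow the general pattern of recipes but are assumptions here, not a definitive schema.

```python
# Sketch of a static recipe check operating on an already-deserialised
# recipe mapping. Key names are assumptions, not a definitive schema.
def check_recipe(recipe):
    """Return a list of problems found in the recipe; empty if none."""
    problems = []
    for key in ("datasets", "tasks"):
        if key not in recipe:
            problems.append("Missing top-level key: {}".format(key))
    for number, task in enumerate(recipe.get("tasks", [])):
        for key in ("kind", "type"):
            if key not in task:
                problems.append("Task {}: missing '{}'".format(number, key))
    return problems


recipe = {
    "datasets": ["/path/to/dataset"],
    "tasks": [{"kind": "processing", "type": "BaselineCorrection"}],
}
print(check_recipe(recipe))  # -> []
```

Because the check never executes a task, it can run over a whole recipe in negligible time, which is exactly what makes it attractive for long task lists.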

Can recipes have LOIs themselves and therefore be retrieved from the extended data safe? This might be a sensible option, although generic (and at the same time unique) LOIs for recipes are much harder to create than LOIs for datasets and the like.

Generally, the concept of a LOI is nothing a recipe needs to know about. But it does know about an ID of any kind. Whether this ID is a (local) path or a LOI doesn’t matter. Somewhere in the ASpecD framework there may exist a resolver (factory) for handling IDs of any kind and eventually retrieving the respective information.
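The resolver-factory idea mentioned above could be sketched as follows. All class and method names are made up for illustration, as is the heuristic used to tell IDs apart; nothing here reflects existing ASpecD code.

```python
# Sketch of a resolver factory: given an ID of any kind (local path,
# LOI, ...), hand out an object able to retrieve the resource.
# All names and the dispatch heuristic are assumptions.
class PathResolver:
    def resolve(self, identifier):
        return "loading dataset from local path: {}".format(identifier)


class LoiResolver:
    def resolve(self, identifier):
        return "retrieving dataset for LOI: {}".format(identifier)


class ResolverFactory:
    """Pick a resolver depending on what the identifier looks like."""

    def get_resolver(self, identifier):
        # Assumed convention for telling a LOI from a local path.
        if identifier.startswith("42."):
            return LoiResolver()
        return PathResolver()


factory = ResolverFactory()
print(factory.get_resolver("/path/to/dataset").resolve("/path/to/dataset"))
```

With such a factory in place, the recipe itself only ever stores an opaque ID, and the decision of how to dereference it stays in one well-defined place of the framework.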
