

A few ideas for how to develop the project further. Currently this is a list serving as a reminder for the main developers themselves, in no particular order, though with a tendency to list the more important aspects first:

For version 0.7

  • Recipe-driven data analysis:

    • Better handling of automatically generated filenames for saving plots and reports: unique filenames; using the label rather than the source (id) of the dataset

    • Handling of results: automatically add datasets to dataset list? How to deal with result labels identical to existing datasets?

  • Report task:

    • Operating on recipes, i.e., reporting on all tasks in a recipe

    • Adding arbitrary dict representations of properties of datasets/results to context

  • Recipes: Subrecipes that can be included in recipes

  • Reports:

    • LaTeXFigureReporter for generating LaTeX code to include figures in documents

    • Looking for templates in user directory

  • Processing of 2D (eventually ND with N>1) datasets:

    • Projecting/averaging excluding certain lines (e.g., due to artifacts from external noise sources)

    • Combining a list of 1D datasets into a 2D dataset (the reverse operation of SliceExtraction)
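The reverse operation of SliceExtraction mentioned above could, at its numerical core, look like the following sketch. The function name and interface are hypothetical; a real implementation would operate on ASpecD dataset objects and take care of axes and metadata:

```python
import numpy as np

def combine_to_2d(slices, axis_values=None):
    """Combine a list of 1D arrays of equal length into a 2D array.

    Hypothetical sketch of the reverse operation of SliceExtraction;
    an actual processing step would work on Dataset objects instead.
    """
    lengths = {len(single_slice) for single_slice in slices}
    if len(lengths) != 1:
        raise ValueError("All 1D slices need to have the same length")
    data = np.stack([np.asarray(single_slice) for single_slice in slices])
    if axis_values is None:
        # Default axis for the newly created dimension: slice index
        axis_values = np.arange(len(slices))
    return data, np.asarray(axis_values)

data, axis = combine_to_2d([[1, 2, 3], [4, 5, 6]])
print(data.shape)  # (2, 3)
```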

For later versions

  • Annotations

    • graphical annotations for characteristic points (and distances, areas?)

  • Remaining basic processing and analysis steps:

    • denoising (via SVD or similar)

    • SNREstimation with explicitly provided noise (usable as both a processing and an analysis step)

  • Interpolation

    • for ND with N>2

    • different types of interpolation

  • Templates for creating derived packages

  • Plotter: Factory to create single plots of each given dataset.

  • Basic maths in values of recipes (ranges, basic numpy functions)?

    May impair the platform-independence of the recipe (i.e., tying it to Python/NumPy)

  • Convert from collections.OrderedDict to dict, as starting with Python 3.7, dicts preserve the insertion order of their keys.
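The SVD-based denoising mentioned among the remaining processing steps could be sketched as a low-rank approximation of a 2D dataset. This is illustrative only; the function name and interface are hypothetical, and a real ProcessingStep would wrap this and operate on dataset.data:

```python
import numpy as np

def svd_denoise(data, rank):
    """Low-rank approximation of a 2D array via truncated SVD.

    Illustrative sketch only: singular values beyond `rank` are
    assumed to carry mostly noise and are discarded.
    """
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0  # discard small singular values (assumed noise)
    return (u * s) @ vt

# Rank-1 test signal with additive noise
rng = np.random.default_rng(0)
signal = np.outer(np.sin(np.linspace(0, np.pi, 50)), np.hanning(40))
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
denoised = svd_denoise(noisy, rank=1)
```

The crucial (and dataset-dependent) question is how to choose the rank, which is one reason this is listed for later versions.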


A list of todos, extracted from the code and documentation itself and only meant as a convenience for the main developers. Ideally, this list will be empty at some point.


How to handle noisy data in the case of area normalisation, as simply taking the absolute value would probably count the noise contribution twice?

original entry
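The concern can be illustrated numerically: for zero-mean noise, the signed area fluctuates around zero, whereas integrating the absolute value rectifies the noise and yields a systematically inflated area:

```python
import numpy as np

# Zero-mean noise: the signed "area" (here simply the mean) fluctuates
# around zero, but taking the absolute value first rectifies the noise,
# yielding a systematically inflated area -- the problem alluded to above.
rng = np.random.default_rng(42)
noise = rng.standard_normal(10_000)

area_signed = noise.mean()            # close to zero
area_absolute = np.abs(noise).mean()  # close to sqrt(2/pi) ~ 0.8 for N(0, 1)
```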


  • Make type of interpolation controllable

  • Check for ways to make it work with ND, N>2

original entry
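Making the type of interpolation controllable could boil down to a keyword parameter dispatched to the actual interpolation routine. A numpy-only sketch with hypothetical names; a real implementation would probably delegate to scipy.interpolate for further kinds:

```python
import numpy as np

def interpolate(values, axis, new_axis, kind="linear"):
    """Interpolate 1D data onto a new axis, with controllable kind.

    Minimal numpy-only sketch; "kind" could become a parameter of the
    interpolation step, e.g. "linear", "nearest", "cubic", ...
    """
    if kind == "linear":
        return np.interp(new_axis, axis, values)
    if kind == "nearest":
        # Index of the nearest original axis point for each new point
        indices = np.abs(axis[None, :] - new_axis[:, None]).argmin(axis=1)
        return values[indices]
    raise ValueError(f"Unknown interpolation kind: {kind}")

axis = np.array([0.0, 1.0, 2.0])
values = np.array([0.0, 2.0, 4.0])
result = interpolate(values, axis, np.array([0.5, 1.5]), kind="linear")
```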


  • Make type of interpolation controllable

  • Make number of points controllable (in absolute numbers as well as minimum and maximum points with respect to datasets)

original entry
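Controlling the number of points could be sketched as follows: deriving a common axis for several datasets, with the point count given either absolutely or bounded by a minimum and maximum. The interface is hypothetical:

```python
import numpy as np

def common_axis(axes, npoints=None, minimum=None, maximum=None):
    """Construct a common axis for several datasets.

    Hypothetical sketch: the number of points can be given in absolute
    numbers (npoints) or derived from the datasets and clipped to the
    interval [minimum, maximum].
    """
    if npoints is None:
        npoints = max(len(axis) for axis in axes)  # default: finest grid
    if minimum is not None:
        npoints = max(npoints, minimum)
    if maximum is not None:
        npoints = min(npoints, maximum)
    start = max(axis.min() for axis in axes)  # restrict to overlapping range
    stop = min(axis.max() for axis in axes)
    return np.linspace(start, stop, npoints)

axes = [np.linspace(0, 10, 51), np.linspace(1, 9, 201)]
axis = common_axis(axes, maximum=100)
```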


There are a number of things that are not yet implemented but highly recommended for a working recipe-driven data analysis that follows good practice for reproducible research. This includes (but may not be limited to):

  • Parser for recipes performing a static analysis of their syntax. Useful particularly for larger datasets and/or longer lists of tasks.

original entry
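Such a static analysis could start as simply as checking the recipe structure before executing it. A minimal sketch; the key names ("datasets", "tasks", "kind", "type") follow the general recipe format, while the concrete checks and messages are hypothetical, and a real parser would additionally validate parameters against the respective task classes:

```python
def check_recipe(recipe):
    """Statically check the structure of a recipe before executing it.

    Hypothetical sketch: returns a list of problems found, empty if the
    recipe passes all (purely structural) checks.
    """
    problems = []
    if not recipe.get("datasets"):
        problems.append("No datasets defined")
    for number, task in enumerate(recipe.get("tasks", [])):
        for key in ("kind", "type"):
            if key not in task:
                problems.append(f"Task #{number} lacks '{key}'")
    return problems

recipe = {"datasets": ["/path/to/dataset"],
          "tasks": [{"kind": "processing", "type": "Normalisation"},
                    {"kind": "singleplot"}]}
print(check_recipe(recipe))  # ["Task #1 lacks 'type'"]
```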


Can recipes have LOIs themselves and therefore be retrieved from the extended data safe? This might be a sensible option, although generic (and at the same time unique) LOIs for recipes are much harder to create than LOIs for datasets and the like.

Generally, the concept of a LOI is nothing a recipe needs to know about. But it does know about an ID of any kind. Whether this ID is a (local) path or a LOI doesn’t matter. Somewhere in the ASpecD framework there may exist a resolver (factory) for handling IDs of any kind and eventually retrieving the respective information.

original entry
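The resolver (factory) mentioned above could dispatch on the form of the identifier. A minimal sketch; the LOI prefix "42." and the return values are assumptions purely for illustration:

```python
from pathlib import Path

def resolve(identifier):
    """Dispatch dataset identifiers to an appropriate retriever.

    Hypothetical sketch of the resolver (factory) mentioned above:
    an identifier looking like a LOI (here assumed to start with "42.")
    is routed to a data-safe lookup, everything else is treated as a
    local path. The returned tuples are placeholders for actual
    importer/retriever objects.
    """
    if str(identifier).startswith("42."):
        return ("datasafe", identifier)
    return ("local", Path(identifier))

print(resolve("42.1001/ds/exp-001"))
print(resolve("/data/measurement.dat"))
```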