
A few ideas for how to develop the project further. Currently this is mainly a reminder list for the main developers themselves, in no particular order, though with a tendency to list more important aspects first:

## For version 0.4

• Remaining basic processing and analysis steps:

• denoising (via SVD or similar)

• SNREstimation with explicitly provided noise (available as both a processing and an analysis step)
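Denoising via truncated SVD could look roughly like the following sketch. The function name and the choice of a fixed rank cutoff are assumptions for illustration, not the framework's actual interface:

```python
import numpy as np


def svd_denoise(data, rank):
    """Denoise 2D data by truncating its singular value decomposition.

    Keeping only the largest ``rank`` singular values removes the
    components that are (presumably) dominated by noise.
    """
    u, s, vh = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0  # discard small singular values assumed to be noise
    return u @ np.diag(s) @ vh


# Example: a rank-1 signal with additive noise
rng = np.random.default_rng(42)
signal = np.outer(np.sin(np.linspace(0, np.pi, 50)),
                  np.cos(np.linspace(0, np.pi, 40)))
noisy = signal + 0.05 * rng.standard_normal(signal.shape)
denoised = svd_denoise(noisy, rank=1)
```

How to choose the rank automatically (rather than hard-coding it) is one of the open questions for an actual processing step.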

• References in processing and analysis steps?

• May be useful to automatically create a list of references to cite (report) depending on the methods actually used.

• May need an implementation of some kind of bibliographic record.
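Such a bibliographic record could start out as a simple data class. Everything below (class name, fields, the example entry) is a hypothetical sketch, not an existing part of the framework:

```python
from dataclasses import dataclass, field


@dataclass
class BibliographicRecord:
    """Minimal bibliographic record (hypothetical sketch).

    A real implementation would probably support further entry
    types and an export to BibTeX or similar formats.
    """
    authors: list = field(default_factory=list)
    title: str = ""
    journal: str = ""
    year: str = ""
    doi: str = ""

    def to_string(self):
        """Return a simple human-readable representation."""
        return "{}: {}. {} ({}). doi:{}".format(
            ", ".join(self.authors), self.title, self.journal,
            self.year, self.doi)


# Placeholder entry for illustration only
record = BibliographicRecord(
    authors=["J. Doe"], title="A method", journal="Some Journal",
    year="2020", doi="10.0000/example")
```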

• Reporter: Method for adding dict representations of datasets to context

• Add figure captions to context if available

• Operating on recipes, i.e. report on all tasks in a recipe

• Adding arbitrary dict representations of properties of datasets/results to context

• Default report templates for each type of processing/analysis task

Includes deciding where to store these templates, whether to store them in different directories for different languages, and the like. Ideally, templates should be copied to a user-accessible directory where they can be modified.

• Expand use cases: reports

• Recipes: Subrecipes that can be included in recipes

• Interfacing towards fitting/simulation frameworks

• Basic maths in values of recipes (ranges, basic numpy functions)?

This may impair the platform independence of recipes, as it ties them to Python/NumPy.
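One conceivable approach is to evaluate recipe values against a whitelist of allowed names, using the `ast` module to reject anything else. Which names to allow is an open design decision; the set below is a mere assumption for the sketch:

```python
import ast

import numpy as np

# Whitelist of names allowed in recipe values (an assumption for
# this sketch; the actual set would need to be decided upon).
_ALLOWED_NAMES = {"arange": np.arange, "linspace": np.linspace, "pi": np.pi}


def evaluate_recipe_value(expression):
    """Evaluate basic maths in a recipe value (a sketch).

    Only simple expressions using whitelisted names are accepted;
    attribute access is rejected to keep the evaluation contained.
    """
    tree = ast.parse(expression, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id not in _ALLOWED_NAMES:
            raise ValueError("Name not allowed: {}".format(node.id))
        if isinstance(node, ast.Attribute):
            raise ValueError("Attribute access not allowed")
    return eval(compile(tree, "<recipe>", "eval"),
                {"__builtins__": {}}, dict(_ALLOWED_NAMES))


values = evaluate_recipe_value("linspace(0, 1, 5)")
```

Even such a restricted evaluator ties recipe semantics to NumPy, which is exactly the platform-independence concern mentioned above.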

## For later versions

• Interpolation

• for ND with N>2

• different types of interpolation
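Both aspects could be covered by SciPy's `RegularGridInterpolator`, which handles arbitrary dimensions and offers different interpolation methods. A sketch, assuming regularly gridded axes (the function name and signature are illustrative only):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator


def interpolate_dataset(axes, data, new_axes, kind="linear"):
    """Interpolate N-D data onto new axes with controllable method.

    ``RegularGridInterpolator`` works for arbitrary dimensions,
    hence covers the ND, N>2 case; ``kind`` may be, e.g.,
    "linear" or "nearest".
    """
    interpolator = RegularGridInterpolator(tuple(axes), data, method=kind)
    grid = np.meshgrid(*new_axes, indexing="ij")
    points = np.stack([axis.ravel() for axis in grid], axis=-1)
    return interpolator(points).reshape([len(axis) for axis in new_axes])


# 3D example (i.e., N > 2): constant data stays constant
axes = [np.linspace(0, 1, 5)] * 3
data = np.ones((5, 5, 5))
new_axes = [np.linspace(0, 1, 9)] * 3
result = interpolate_dataset(axes, data, new_axes, kind="linear")
```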

• Templates for creating derived packages

• Logging

• Tabular representations of characteristics extracted from datasets

• Plotter: Factory to create single plots of each given dataset.
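The factory idea could be sketched as follows. Class names and the interface are assumptions; the actual factory would presumably return plotters matching the dimensionality of each dataset:

```python
class SinglePlotter:
    """Stand-in for an ASpecD single plotter (hypothetical)."""

    def __init__(self, dataset):
        self.dataset = dataset


class PlotterFactory:
    """Create one single-plotter instance per given dataset.

    A sketch of the idea only; a real factory would choose the
    plotter class depending on properties of each dataset.
    """

    plotter_class = SinglePlotter

    def get_plotters(self, datasets):
        """Return a list with one plotter per dataset."""
        return [self.plotter_class(dataset) for dataset in datasets]


factory = PlotterFactory()
plotters = factory.get_plotters(["dataset1", "dataset2"])
```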

## Todos

A list of todos, extracted from the code and documentation itself, and meant only as a convenience for the main developers. Ideally, this list will be empty at some point.

Todo

How to handle noisy data in case of area normalisation? Taking the absolute value rectifies the noise, which would probably account for roughly twice its actual contribution to the area.
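A minimal sketch of area normalisation illustrating the concern (the function name is an assumption, not the framework's actual processing step):

```python
import numpy as np


def normalise_to_area(data):
    """Normalise data such that the sum of absolute values is one.

    For noisy data, taking the absolute value rectifies the noise,
    so the noise contributes roughly twice to the computed area --
    the open question raised above.
    """
    return data / np.sum(np.abs(data))


data = np.array([0.0, 2.0, 4.0, 2.0, 0.0])
normalised = normalise_to_area(data)
```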


Todo

• Make type of interpolation controllable

• Check for ways to make it work with ND, N>2


Todo

• Make type of interpolation controllable

• Make number of points controllable (in absolute numbers as well as minimum and maximum points with respect to datasets)


Todo

Describe the settings in more detail, thus providing users of this class, and in turn of the aspecd.report.Reporter class, with ideas of how to create their templates.


Todo

There are a number of things that are not yet implemented but are highly recommended for a working recipe-driven data analysis that follows good practice for reproducible research. This includes (but may not be limited to):

• Parser for recipes performing a static analysis of their syntax. Useful particularly for larger datasets and/or longer lists of tasks.
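Such a static check could start out as simply as the sketch below, which verifies the basic structure of a recipe before any task is executed. The exact set of required keys is an assumption for illustration:

```python
def check_recipe(recipe):
    """Statically check the basic structure of a recipe (a sketch).

    Assumes a recipe to be a dict with ``datasets`` and ``tasks``
    keys, each task being a dict with at least ``kind`` and
    ``type``. Returns a list of problems found; an empty list
    means the recipe passed the check.
    """
    problems = []
    for key in ("datasets", "tasks"):
        if key not in recipe:
            problems.append("Missing key: " + key)
    for number, task in enumerate(recipe.get("tasks", [])):
        for key in ("kind", "type"):
            if key not in task:
                problems.append(
                    "Task #{}: missing key '{}'".format(number, key))
    return problems


recipe = {"datasets": ["/path/to/dataset"],
          "tasks": [{"kind": "processing", "type": "Normalisation"}]}
```

Collecting all problems instead of raising on the first one lets users fix an entire recipe in one go, which matters particularly for longer lists of tasks.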


Todo

Can recipes have LOIs themselves and therefore be retrieved from the extended data safe? Might be a sensible option, although generic (and at the same time unique) LOIs for recipes are much harder to create than LOIs for datasets and the like.

Generally, the concept of a LOI is nothing a recipe needs to know about. But it does know about an ID of any kind. Whether this ID is a (local) path or a LOI doesn’t matter. Somewhere in the ASpecD framework there may exist a resolver (factory) for handling IDs of any kind and eventually retrieving the respective information.
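The resolver factory mentioned above might be sketched as follows. The dispatch rule (treating IDs starting with "42." as LOIs) and all class names are assumptions for illustration only:

```python
class LoiResolver:
    """Stand-in for a resolver retrieving information via a LOI."""

    kind = "loi"


class PathResolver:
    """Stand-in for a resolver treating the ID as a local path."""

    kind = "path"


class ResolverFactory:
    """Return a resolver matching the kind of ID given (a sketch).

    For illustration, IDs starting with "42." are assumed to be
    LOIs; everything else is treated as a local path.
    """

    def get_resolver(self, identifier):
        """Return a resolver instance appropriate for the ID."""
        if identifier.startswith("42."):
            return LoiResolver()
        return PathResolver()


factory = ResolverFactory()
```

With such a factory in place, a recipe only ever stores an opaque ID, and the framework decides at run time how to retrieve the corresponding information.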

