Roadmap

A few ideas for how to develop the project further. Currently, this is a list serving as a reminder for the main developers themselves, in no particular order, though with a tendency to list more important aspects first:

For next releases

  • Plotting

  • Processing

    • DatasetCombination: combine data from several datasets into a single dataset; parameters allowing one to define the axis values/quantity/unit, possibly even from given metadata; to decide: how to handle metadata that might be invalidated? (See the first sketch after this list.)

    • MetadataUpdate/MetadataChange: change the metadata of a given dataset from within a recipe. Useful in case datasets contain (known) spurious or otherwise inappropriate metadata, as metadata are provided manually and are therefore prone to human error.

    • aspecd.processing.BaselineCorrection with fit_area definable as axis range, and arbitrary parts of the axis (e.g., in the middle of a dataset or with separate fit areas)

  • Recipe-driven data analysis:

    • Better handling of automatically generated filenames for saving plots and reports: unique filenames; using the label rather than the source (id) of the dataset

    • If a figure is plotted twice using automatically generated filenames, use different filenames (e.g., increment a number).

    This points towards reworking the aspecd.plotting.Saver class, allowing for an additional optional parameter such as suffix. It would make handling overly long filenames easier as well. (See the second sketch after this list.)

    • Handling of results: automatically add datasets to dataset list? How to deal with result labels identical to existing datasets?

    • Sub-recipes that can be included in recipes

      Allows for creating a library of recipes for rather complex tasks that can simply be called as a single step from another recipe

    • Static (syntax) checker for recipes prior to their execution

  • Report task:

    • Operating on recipes, i.e. report on all tasks in a recipe

    • Adding arbitrary dict representations of properties of datasets/results to context

  • Reports:

    • Looking for templates in user directory

  • Documentation:

    • More developer documentation providing hints and “best practices” for how to develop classes either in ASpecD or in derived packages.

    • How to debug a recipe?

    • Better document command-line options of the “serve” command
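
The DatasetCombination idea mentioned above essentially boils down to stacking the data of several (typically one-dimensional) datasets along a new axis whose values, quantity, and unit need to be defined. A minimal sketch of the underlying operation in plain NumPy, deliberately ignoring the open question of how to handle metadata that might be invalidated (all names are purely illustrative, not part of the ASpecD API):

    import numpy as np

    def combine_data(data_list, new_axis_values=None):
        """Stack 1D data arrays into a 2D array along a new axis.

        Purely illustrative: a real DatasetCombination step would operate
        on datasets and take care of axes, quantities, units, and metadata.
        """
        data = np.stack(data_list, axis=1)
        if new_axis_values is None:
            new_axis_values = np.arange(len(data_list))
        return data, np.asarray(new_axis_values)

    # Usage: combine three traces recorded at different temperatures
    traces = [np.random.random(1024) for _ in range(3)]
    combined, temperature_axis = combine_data(traces, [280, 290, 300])
    print(combined.shape)  # (1024, 3)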
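
For the unique filenames mentioned above, one conceivable approach is to append an incrementing suffix until the filename is unused. The helper below only illustrates the idea of an optional suffix parameter for a reworked saver class; it is not the actual aspecd.plotting.Saver interface:

    import os

    def unique_filename(filename):
        """Return filename, appending '-1', '-2', ... while it already exists.

        Illustrative helper only; a reworked saver class might achieve the
        same via an optional suffix parameter.
        """
        if not os.path.exists(filename):
            return filename
        basename, extension = os.path.splitext(filename)
        counter = 1
        while os.path.exists(f"{basename}-{counter}{extension}"):
            counter += 1
        return f"{basename}-{counter}{extension}"

    # Saving "overview.pdf" twice would yield "overview.pdf" and "overview-1.pdf".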

For later versions

  • Convert from collections.OrderedDict to dict, as starting with Python 3.7, dicts preserve the insertion-order of the keys.

  • Plot styles

    • user-defined styles

  • Annotations

    • graphical annotations for characteristic points (and distances, areas?)

  • Remaining basic processing and analysis steps:

    • denoising (via SVD or similar)

    • SNREstimation with explicitly provided noise (using both processing and analysis)

  • Interpolation

    • different types of interpolation

  • Templates for creating derived packages

  • Plotter: Factory to create single plots of each given dataset.

  • Basic maths in values of recipes (ranges, basic numpy functions)?

    May impair the platform independence of the recipe (i.e., tying it to Python/NumPy); see the sketch after this list.
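
One conceivable (and deliberately restrictive) implementation of basic maths in recipe values would evaluate an expression string with builtins disabled and only NumPy exposed, as sketched below. This is merely an illustration of the trade-off mentioned above, not an existing ASpecD feature; a production implementation would rather parse expressions explicitly (e.g., via the ast module):

    import numpy as np

    def evaluate_recipe_value(expression):
        """Evaluate a recipe value such as "np.linspace(340, 350, 201)".

        Only NumPy is exposed and builtins are disabled. Even so, this ties
        the recipe to Python/NumPy and is not a proper security measure.
        """
        return eval(expression, {"__builtins__": {}}, {"np": np})

    # Usage: an axis range given as a string value in a recipe
    field_axis = evaluate_recipe_value("np.linspace(340, 350, 201)")
    print(field_axis[0], field_axis[-1])  # 340.0 350.0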

Todos

A list of todos, extracted from the code and documentation itself, and only meant as a convenience for the main developers. Ideally, this list will be empty at some point.

Todo

Flesh out these additional DatasetAnnotation classes, particularly in light of the newly created PlotAnnotation classes that may eventually be a way to graphically display the dataset annotations.

Todo

Clarify the difference between the HistoryRecord and Record classes, and explain which is used when and how.

Todo

How to handle noisy data in the case of area normalisation, as this would probably account for double the noise if simply taking the absolute value?
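
To illustrate the problem: area normalisation typically divides the data by the integral of their absolute value. Where the signal is close to zero over large parts of the axis, zero-mean noise no longer cancels once the absolute value is taken, systematically inflating the area and hence the normalisation factor. A small, purely illustrative NumPy demonstration:

    import numpy as np

    rng = np.random.default_rng(42)
    x = np.linspace(-10, 10, 2001)
    signal = np.exp(-x ** 2)                      # clean signal, baseline mostly zero
    noisy = signal + rng.normal(0, 0.05, x.size)  # add zero-mean noise

    dx = x[1] - x[0]
    area_clean = np.abs(signal).sum() * dx
    area_noisy = np.abs(noisy).sum() * dx
    # area_noisy is systematically larger, as |noise| does not average to zero
    print(area_clean, area_noisy)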

Todo

  • Make type of interpolation controllable

Todo

  • Make type of interpolation controllable

  • Make number of points controllable (in absolute numbers as well as minimum and maximum points with respect to datasets)

Todo

While recipe-driven data analysis generally works well in practice, improving usability and robustness is high on the list. This includes (but may not be limited to) a parser for recipes that performs a static analysis of their syntax, which is particularly useful for larger datasets and/or longer lists of tasks. Potential steps in this direction:

  • Add check() method to aspecd.tasks.Task

  • Define required parameters in a (private) attribute, either on the level of the individual task or even on the level of the underlying objects

  • Potentially reuse the _sanitise_parameters() method.
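
A minimal sketch of what such a check() method might look like, validating the keys of a task's parameters against a (private) list of required parameters before any task gets executed. Class and attribute names are purely illustrative and do not reflect the actual aspecd.tasks API:

    class Task:
        """Illustrative stand-in for a task in a recipe, not the ASpecD class."""

        _required_parameters = []

        def __init__(self, parameters=None):
            self.parameters = parameters or {}

        def check(self):
            """Raise if required parameters are missing (static check)."""
            missing = [key for key in self._required_parameters
                       if key not in self.parameters]
            if missing:
                raise ValueError(f"Missing parameters: {missing}")

    class BaselineCorrectionTask(Task):
        _required_parameters = ["kind", "order"]

    # Check all tasks of a recipe before executing any of them:
    tasks = [BaselineCorrectionTask({"kind": "polynomial", "order": 0})]
    for task in tasks:
        task.check()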

Todo

Can recipes have LOIs themselves and therefore be retrieved from the extended data safe? Might be a sensible option, although generic (and at the same time unique) LOIs for recipes are much harder to create than LOIs for datasets and the like.

Generally, the concept of a LOI is nothing a recipe needs to know about. But it does know about an ID of any kind. Whether this ID is a (local) path or a LOI doesn’t matter. Somewhere in the ASpecD framework there may exist a resolver (factory) for handling IDs of any kind and eventually retrieving the respective information.
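
Such a resolver factory could dispatch on the form of the identifier and return an object that knows how to retrieve the corresponding resource. The sketch below assumes, purely for illustration, that a LOI can be recognised by a prefix (here "42."), everything else being treated as a local path; none of these class names exist in the ASpecD framework:

    import os

    class PathResolver:
        def retrieve(self, identifier):
            with open(identifier, "rb") as file:
                return file.read()

    class LoiResolver:
        def retrieve(self, identifier):
            raise NotImplementedError("Would query the datasafe for this LOI")

    class ResolverFactory:
        """Return a resolver suitable for a given identifier (illustrative only)."""

        def get_resolver(self, identifier):
            if identifier.startswith("42."):  # assumed LOI prefix, hypothetical
                return LoiResolver()
            if os.path.exists(identifier):
                return PathResolver()
            raise ValueError(f"Cannot handle identifier: {identifier}")

    # The recipe only hands over an ID; the factory decides how to resolve it:
    # resolver = ResolverFactory().get_resolver("/path/to/local/dataset")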
