- runs ./waf build on reload
- more information in engine dump
- after a crash, do not reopen previously opened projects (to avoid entering a crash loop)
- rename nodes via context menu (double click on title was very flaky)
- settings dialog shows engine state and load
- dialog to add N measures to measured tracks (Score and Beat Tracks), or fill them till the end of the project
- fix test sample playback in settings dialog
- fix some class of crashes when restarting the engine
- changing BPM triggers rerendering of samples
- fix size of control dials in mixer node UI
- remove crackling when changing mixer control values
- fix crash in sample track when changing BPM
- engine state wasn't updated when undoing a node removal
- fix corruption of device list in MIDI Source node when plugging in/unplugging a device
- gracefully handle more crashes while opening/creating projects
- fix bug where name labels of hidden tracks could become visible
- resize track name labels when renaming tracks
- fix bug where unittest results might not get written to disk
- separate messages for engine load and state
- EngineState tracker is owned by EditorApp (which also owns the engine client) instead of EditorWindow
- Support importing MP3 and AAC files (decoded with ffmpeg).
- Files are decoded to raw f32 files into the project directory.
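The decode step could look roughly like the sketch below: build an ffmpeg invocation that converts a compressed file to raw little-endian 32-bit float samples inside the project directory. The function name and paths are illustrative assumptions, not noisicaä's actual code; the ffmpeg flags (`-f f32le`, `-acodec pcm_f32le`) are real.

```python
# Sketch of the import step: decode MP3/AAC to raw f32 samples via ffmpeg.
# build_decode_cmd and the file layout are hypothetical.
import os

def build_decode_cmd(src_path: str, project_dir: str) -> list:
    """Build an ffmpeg argv that decodes src_path to raw f32 samples."""
    base, _ = os.path.splitext(os.path.basename(src_path))
    dst_path = os.path.join(project_dir, base + '.raw')
    return [
        'ffmpeg', '-i', src_path,
        '-f', 'f32le',             # raw, headerless output format
        '-acodec', 'pcm_f32le',    # little-endian 32-bit float samples
        dst_path,
    ]

print(build_decode_cmd('/tmp/song.mp3', '/tmp/project'))
```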
- Correctly render stereo files.
- Render chunks of the wave form into "cache tiles".
- Do rendering asynchronously in a background thread.
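A minimal sketch of the "cache tile" idea, assuming each tile stores the min/max of a fixed-size chunk of samples so the waveform can be drawn from tiles instead of rescanning the raw data. All names and the tile size are illustrative.

```python
# Hypothetical cache-tile rendering: one (min, max) pair per chunk.
from typing import List, Tuple

TILE_SIZE = 4  # samples per tile; real code would use a much larger size

def render_tiles(samples: List[float]) -> List[Tuple[float, float]]:
    tiles = []
    for start in range(0, len(samples), TILE_SIZE):
        chunk = samples[start:start + TILE_SIZE]
        tiles.append((min(chunk), max(chunk)))
    return tiles

print(render_tiles([0.0, 0.5, -0.5, 0.2, 1.0, -1.0, 0.0, 0.3]))
# → [(-0.5, 0.5), (-1.0, 1.0)]
```

In a real editor the tile computation would run in the background thread mentioned above, so the UI never blocks on the raw samples.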
- Improve test coverage.
- Initial project debugger.
- Random name generator for new projects.
- Don't crash when opening a project fails.
- Fix crash when opening project with deleted node.
- All project files are now in a single directory.
- Single model for instrument library shared across widgets.
- Also add ControlValueDials for a-rate control ports (and make them exposeable).
- Fix exception in UI when removing nodes with connections.
- Fix exception in PluginHost when cleaning up some LV2 plugins.
- Dump audio engine opcode list to log.
Adding new node types required adding bits and pieces in many places, which was tedious and
failure prone. Now it's just a matter of cloning the directory of one of the existing nodes and
adding a few lines to 'registries' (all in one directory).
- Reanimate custom csound node.
- All nodes can be muted.
- fix a number of newly reported issues with PyQt5 stubs.
- add empty stub files for all 3rdparty libs without stubs.
- remove local copy of protobuf stubs, the version from the official typeshed is now good enough.
- tracks are now just special types of nodes in the pipeline graph.
- completely reimplemented the pipeline graph UI.
- greatly simplified the UI, got rid of all the docks.
- some functionality got lost along the way or hasn't been reimplemented yet.
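The "tracks are just nodes" change can be sketched like this: every element, track or not, derives from one node base class, and the pipeline graph only deals in nodes and connections. All class and method names here are hypothetical, not noisicaä's actual API.

```python
# Sketch: tracks as ordinary node types in a single pipeline graph.
class Node:
    def __init__(self, name: str) -> None:
        self.name = name

class MixerNode(Node):
    pass

class ScoreTrackNode(Node):
    """A track is not a special case; it is just another node."""

class PipelineGraph:
    def __init__(self) -> None:
        self.nodes = []
        self.connections = []  # (source node, dest node) pairs

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)

    def connect(self, src: Node, dest: Node) -> None:
        self.connections.append((src, dest))

graph = PipelineGraph()
track = ScoreTrackNode('piano')
mixer = MixerNode('main mix')
graph.add_node(track)
graph.add_node(mixer)
graph.connect(track, mixer)
print(len(graph.nodes), len(graph.connections))  # → 2 1
```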
- Move basic model code into noisicaa.model.
- Change internal property storage to protobufs.
- Commands also use protobufs.
- All serialization uses protobufs.
- Objects are managed by a Pool object.
- Object references are stored as object IDs and dereferenced lazily.
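The pool-plus-lazy-ID scheme can be sketched as follows, assuming hypothetical names: an object stores the target's ID instead of a direct Python reference, and asks the pool to dereference it only on access.

```python
# Sketch of Pool-managed objects with lazily dereferenced ID references.
class Pool:
    def __init__(self) -> None:
        self._objects = {}
        self._next_id = 1

    def create(self, cls, **kwargs):
        obj = cls(self, self._next_id, **kwargs)
        self._objects[self._next_id] = obj
        self._next_id += 1
        return obj

    def lookup(self, obj_id: int):
        return self._objects[obj_id]

class ProjectObject:
    def __init__(self, pool: Pool, obj_id: int) -> None:
        self.pool = pool
        self.id = obj_id

class TrackRef(ProjectObject):
    def __init__(self, pool, obj_id, target_id: int) -> None:
        super().__init__(pool, obj_id)
        self.target_id = target_id  # stored as an ID, not a reference

    @property
    def target(self):
        # Dereferenced lazily, only when accessed.
        return self.pool.lookup(self.target_id)

pool = Pool()
track = pool.create(ProjectObject)
ref = pool.create(TrackRef, target_id=track.id)
print(ref.target is track)  # → True
```

Storing IDs rather than references keeps serialized objects self-contained and avoids ordering problems when deserializing a graph of objects.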
- Everything is pylint clean.
- noisicaa.music and most of noisicaa.ui are now mypy strict.
- Generate mypy stubs for generated protobuf code.
- Cleaned up some obsolete cruft.
- Some improvements to runtests.
- Run all audio in a single process.
- Instead of a main audio process connected to the backend and one process per opened project.
- Audio engine is composed of a tree of 'realms', with the top-level realm hooked up to the
backend, and each project creates a sub-realm.
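A toy sketch of the realm tree, under the assumption that each realm mixes its children's output into its own block; the root realm is what feeds the backend. Names and the mixing model are illustrative only.

```python
# Hypothetical realm tree: root realm feeds the backend, each project is
# a child realm whose output is mixed into its parent.
class Realm:
    def __init__(self, name: str, level: float = 0.0) -> None:
        self.name = name
        self.level = level       # stand-in for this realm's own output
        self.children = []

    def add_child(self, child: 'Realm') -> None:
        self.children.append(child)

    def process_block(self) -> list:
        block = [self.level] * 4  # 4-sample blocks, for illustration
        for child in self.children:
            for i, sample in enumerate(child.process_block()):
                block[i] += sample
        return block

root = Realm('root')                      # hooked up to the backend
root.add_child(Realm('project-1', level=0.25))
root.add_child(Realm('project-2', level=0.5))
print(root.process_block())               # → [0.75, 0.75, 0.75, 0.75]
```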
- Run plugins in a separate process, isolated from the main audio engine.
- Turns out I really want to support the LV2 instance access feature, so UIs need to run in the
same process as the plugin itself.
- Plugins therefore need to be isolated from each other, so they can use different UI toolkits
  without linker symbol clashes. I.e. they can't all run in the main audio process.
- Separate audio processing from the project process.
- Instead of generating MIDI events from the project and routing them to the audio pipeline, have
  processors in the pipeline which capture the full state of the project and emit events during
  playback. Changes to the project update those processors to keep them in sync.
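A minimal sketch of that processor model, with hypothetical names: the processor holds the track's contents itself and emits events while processing blocks, and project edits only update its state.

```python
# Sketch: a pipeline processor that owns the track state and emits events
# during playback, instead of the project streaming MIDI to the pipeline.
class EventSourceProcessor:
    def __init__(self) -> None:
        # Full track state lives in the processor: time -> events.
        self.events_by_time = {}

    def update(self, time: int, events: list) -> None:
        # Called when the project changes, keeping the processor in sync.
        self.events_by_time[time] = list(events)

    def process_block(self, start: int, end: int) -> list:
        # During playback the processor emits events for [start, end).
        out = []
        for t in range(start, end):
            out.extend(self.events_by_time.get(t, []))
        return out

proc = EventSourceProcessor()
proc.update(2, ['note-on C4'])
proc.update(5, ['note-off C4'])
print(proc.process_block(0, 4))  # → ['note-on C4']
```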
- Use protocol buffers for a lot of internal messages in non-realtime contexts.
- Very basic support for LV2 plugin UIs.
- Trying to get that working triggered all those changes above...
- And many, many more changes...
- Improve time management:
- Type-safe C++ and Python classes for musical times and durations.
- Everything works in the musical time domain until the very end. During playback each
  sample is mapped in an efficient way to a musical time to figure out what needs to play.
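One way to picture exact musical times is with rational numbers, as in this sketch: positions are stored as `fractions.Fraction` so no precision is lost, and sample positions are mapped into the musical domain only at playback time. The class, function, and mapping formula are assumptions for illustration, not noisicaä's actual implementation.

```python
# Sketch of type-safe musical time built on exact fractions.
from fractions import Fraction

SAMPLE_RATE = 44100

class MusicalTime:
    def __init__(self, value: Fraction) -> None:
        self.value = value  # in whole notes, stored exactly

    def __eq__(self, other: 'MusicalTime') -> bool:
        return self.value == other.value

def sample_to_musical_time(sample_pos: int, bpm: int) -> MusicalTime:
    # beats elapsed = samples / samples-per-beat; 4 beats per whole note.
    beats = Fraction(sample_pos * bpm, SAMPLE_RATE * 60)
    return MusicalTime(beats / 4)

# One second at 120 BPM is 2 beats, i.e. half a whole note.
t = sample_to_musical_time(44100, 120)
print(t.value)  # → 1/2
```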
- Make variable names, etc. more consistent.
- Remove the AudioStreamProxy:
- Move everything needed for playback into the backend.
- The player logic (playing or not, looping, which musical time to play next) lives in the backend.
- Contents of each track are kept in an efficient-to-play format (e.g. score and beat tracks).
- Project updates the backend (of every active player) whenever track contents change,
thus keeping the backend up-to-date.
- Employs lock-free data structures where applicable.
- This converts a large chunk of code in the critical path from Python to C++.
- Introduce protocol buffers for messages in non-realtime contexts.