Reanimate website sources.

main
Ben Niemann 2021-12-27 02:39:53 +01:00
parent 9181ae4118
commit b6e8ffe564
65 changed files with 2671 additions and 2 deletions

.gitignore

@@ -8,3 +8,6 @@
/flatpy/flatpy.egg-info/
/flatpy/.env/
/playground/
/website/.venv/
/website/_build/
/website/_serve/

bin/odasite 100755

@@ -0,0 +1,17 @@
#!/bin/bash
# Bootstrap a virtualenv with the odasite static site generator on first use,
# then forward all arguments to it.
set -e
ROOT="$(realpath "$(dirname "$0")"/..)"
export PYTHONPATH=
if [ ! -d "${ROOT}/website/.venv" ]; then
    /usr/bin/python3 -m venv "${ROOT}/website/.venv"
    touch "${ROOT}/website/.venv/.nobackup"
    "${ROOT}/website/.venv/bin/python" -m pip install -U pip setuptools wheel
    "${ROOT}/website/.venv/bin/python" -m pip install -e ~/projects/odasite
fi
export DIFF="$(which meld)"  # diff tool picked up by odasite (e.g. for remote-diff)
"${ROOT}/website/.venv/bin/odasite" -c website/ "$@"

@@ -25,9 +25,8 @@ let
};
in nixpkgs.mkShell {
packages = [
nixpkgs.meld
nixpkgs.meterbridge
nixpkgs.qt5.qtdeclarative.bin
nixpkgs.qt5.qtdeclarative.dev
];
inputsFrom = [ noisicaa ];

website/README.md 100644

@@ -0,0 +1,19 @@
This is the source code for the http://noisicaa.odahoda.de/ website.
Note that it uses a static site generator called `odasite`, which is currently not publicly
available, so the following instructions only work for me.
# Editing
```bash
bin/odasite serve
```
View website at http://localhost:8000/
# Upload
```bash
bin/odasite remote-diff # Review changes
bin/odasite upload
```

@@ -0,0 +1,25 @@
Title: Sins of my Youth
Date: 2015-12-03
I mentioned the [Deluxe Music Construction
Set](https://en.wikipedia.org/wiki/Deluxe_Music_Construction_Set) before. I used that mostly to
transcribe sheet music, which I found in the public library. Some classical pieces,
e.g. https://youtu.be/UqOOZux5sPE - because that movie is awesome, and if I remember correctly some
of the older Genesis songs. I can't remember if I actually tried to write my own music with it -
whatever files I saved back then are gone forever.
But I also played around with "trackers" - I think it was mostly
[NoiseTracker](https://en.wikipedia.org/wiki/NoiseTracker) and
[OctaMED](https://en.wikipedia.org/wiki/OctaMED). And strangely enough, even after ~25 years I still
have those files sitting on my hard disk. And there's even software that can play them back or
convert them to slightly more modern formats.
Just for giggles, I uploaded some of those to
[SoundCloud](https://soundcloud.com/odahoda/sets/sins-of-my-youth):
<iframe frameborder="no" height="450" width="100%" scrolling="no" src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/playlists/171468282&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;visual=true"></iframe>
I can't claim that my artistic skills have changed, let alone improved, in the meantime.
But I'm probably a better coder now.

@@ -0,0 +1,7 @@
Title: Initial commit on GitHub
Date: 2015-11-29
I've been working on this project for a while already, but only now made it public on GitHub. It's
not yet in a state where it's usable though - that'll still take a while.
And I guess I should also describe what this is about at some point...

@@ -0,0 +1,106 @@
Title: Introducing noisicaä - A simple music editor
Date: 2015-11-29
I don't know much about music, but it's a topic that interests me.
I don't play any instruments and I'm too old or too lazy to learn one. Probably both.
But I'm one of those computer people, so I would naturally use a computer to make music - or at
least attempt to.
So I looked around to see what software is out there, but the stuff I found didn't get me terribly
excited. Now I have to add that I'm using Linux (Kubuntu on my desktop, to be more specific) and
while I also have Windows on my machine, I wouldn't want to use it - except as a bootloader for
Steam. I also didn't look at commercial software, just what's out there in the open source
world. That limits the selection. And the stuff that actually exists in that small subsection of the
market is either too simple or overwhelms me with a ton of options and possibilities that I don't know
how to use, because I don't actually know what that stuff means.
On the other hand I still remember playing around with the [Deluxe Music Construction
Set](https://en.wikipedia.org/wiki/Deluxe_Music_Construction_Set) on my Amiga back in the day. And
that level of complexity (or the lack thereof) feels like exactly the thing that would be
appropriate for me today.
So because I couldn't find any existing software that I liked, I started writing my own. Also
writing code is fun, so I don't really need a reason.
I'm also using this opportunity to play with some fancy tools that I haven't used before: Python 3
(it's about time), Cython (sounded like a cool toy, but I never had a reason to use it before) and
Qt5 (I've been using wx for UIs in the past, but I like Qt way more now).
I'm also giving git another try. Well, we'll see. That has been quite a lackluster experience in the
past.
### What's different about noisicaä?
noisicaä uses classical musical notation: putting notes onto staves, with clefs, measures, etc. Most
"modern" software seems to focus on a sequencer style interface. My little bit of knowledge about
music is stuck in the time when I had music in school, and continuing from that point seems to be
easiest for me. Perhaps later I'll figure out that the piano roll UI of sequencers is indeed more
powerful than the system that was used by Bach and Mozart back in the day. But the software can
make that transition together with me, as I acquire more skills.
Another thing that seems popular, especially in the realm of commercial software (which I can only
tell from seeing random screenshots on random websites, since I never used any of these), is that
the UI simulates existing hardware. That might be nice for people with existing experience in making
music using real instruments and studio hardware. But for me that's just another obstacle, because
in addition to learning about the musical domain itself, I would also have to deal with a user
interface that doesn't follow known guidelines. At least known to someone who never set foot in a
music studio, but has worked with computers for the past 25 years.
So noisicaä's UI rather follows established schemes that you would also find in IDEs, office suites,
and the like.
### Features
Not a lot yet. This is still pre-alpha software.
There's some basic editing and you can play it back. The instruments are currently rendered using
[FluidSynth](http://www.fluidsynth.org/), i.e. it uses soundfonts. I have vague plans to
also support plain .wav files as samples, or use synthesizers to create
instruments. [Csound](https://csound.com/) seems like the kind of arcane but powerful system
that I could like a lot.
Of course the music should eventually be rendered as .flac, .ogg, etc. files, but that doesn't exist
yet.
There should be some support for filters and general mixing/production features, so I get a little
bit more than the typical MIDI
sounds. [LADSPA](https://en.wikipedia.org/wiki/LADSPA)/[LV2](http://lv2plug.in/) plugins and again
Csound seem to be the right tools for that.
I'll also do something with MIDI input. There's a little MIDI controller here on my desk that wants
to be entertained. But probably not real-time recording, because I couldn't play anything in real time
that's worth recording.
And further in the future I'm also thinking about some hybrid system which mixes composed tracks
with recorded tracks - where the source of the recording will probably be my wife playing the guitar
or bass.
And who knows what else. That'll depend a lot on where my learning curve takes me. I will use this
project to implement those things that I learn about music, so I can apply them.
### What, music software in Python?
That's right. People seem to make a big fuss about latency, real-time support in the kernel and that
kind of stuff. They probably know better than I, but I'll ignore that advice anyway. Most of the
coding is around the UI, persistence, etc. and not involved in the playback at all. And despite not
having spent a lot of time optimizing anything, I haven't observed any buffer underruns during
playback yet. Perhaps using a totally overpowered desktop is just good enough (though it's been a
few years since that machine was "high end").
Also I don't see noisicaä as a tool to be used on stage for live performances. So occasional
glitches do not worry me a lot.
And then there's [Cython](http://cython.org/), which I started using mostly for interacting with C
libraries. I already use it for some of the bit crunching parts - not because those already needed
optimization, but just because I wanted to play with it a bit more (a.k.a. premature optimization).
### So... you want to make music?
That's the weak spot of this whole enterprise. I seriously doubt that I have the creativity and
artistic skills to produce something that could actually be called music. Or warrant all the effort
I'm now putting into this. Even after I learned all the theory that there is to learn and created
the greatest software that any artist could dream of.
Bah, who cares...

@@ -0,0 +1,6 @@
Title: Photos, or it didn't happen!
Date: 2015-12-03
Just to prove that this thing actually exists:
[[img:2015-12-03-photos.png]]

@@ -0,0 +1,38 @@
Title: LV2 confusion with internal presets
Date: 2018-03-04
I managed to get the [Helm](http://tytel.org/helm/) synth working as an LV2 in noisicaä, including
UI. Which is great, because that's a very nice synth.
But I ran into an issue, which caused me some head scratching, because I thought that I was doing
something wrong. As it turned out, this is really a problem with (at least) LV2 and Helm's feature
to load presets from within its own UI.
Helm's internal state is determined by the values of the 100+ control input ports, plus some
additional state that isn't captured by those ports (e.g. which controls are modulated by an LFO).
Now when you load a preset, Helm sets all of its controls to specific values, but it has no way to
tell the host about those values. Which means the host is still feeding the previous values into all
control input ports. Helm seems to have some logic to ignore those values from the host until you
actually change them from outside of Helm (e.g. via automation or the generic UI). If you change
some control manually in Helm's UI, then the value is posted from the UI to the host, so it can
update the value that it feeds into the respective control port.
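To illustrate the mechanics (Python-ish pseudocode, not the actual LV2 API; all names are made up):

```python
# Illustrative pseudo-host: the host keeps its own copy of every control
# input value and writes it into the port buffers on every cycle, so a
# preset loaded inside the plugin's own UI is effectively invisible to it.
class PseudoHost:
    def __init__(self, plugin):
        self.control_values = {port: port.default for port in plugin.control_inputs}

    def run_cycle(self, plugin, nframes):
        for port, value in self.control_values.items():
            port.buffer[0] = value  # overwrites whatever the preset set internally
        plugin.run(nframes)

    def control_changed_from_outside(self, port, value):
        # only automation or the generic UI update the host's copy
        self.control_values[port] = value
```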
But if you now close the project and open it again, Helm's state gets all messed up, because for
most controls the host never learned about the right values from the preset that you selected, and
Helm gets initialized with effectively random values. Things get even more confusing, because the
control values are also stored in Helm's '[internal
state](http://lv2plug.in/ns/ext/state/state.html)' blob and depending on the order in which control
ports and internal state are initialized you get different results.
Turns out that [Ardour](https://ardour.org/) has the same issue, so I'm not alone with it. I don't
know if, e.g., the VST API solves this issue, or if it was generally a bad idea to have presets
that change the values of control ports - and manage them within the plugin itself. If the presets
were handled in the UI, then LV2 has the API to set the control ports... or is it just a bug in
Helm that this API isn't used when switching presets?
So the lesson is to not touch the preset browser in Helm, but instead use the preset menu from the
host - which has the same list of presets, and knows how to set the control ports correctly when
loading a preset.
But noisicaä doesn't know about presets yet...

@@ -0,0 +1,27 @@
Title: The nature of mixing
Date: 2018-07-04
Just a little thought in between, while I'm writing a longer post...
The concept of mixing audio, applying filters, processors and whatnot to perfect and polish the
sound is actually quite artificial, at least to the extent that it's being done these days.
If you think about how music was performed for most of humanity's existence, all you had were the
"raw" sounds of the performers. There were some limited ways to tweak those, like "hey you, don't
hit those drums so hard" or "you, come to the front, so we can hear you better", but that's about
it.
And while people for sure noticed the difference that the environment had on the experience, if they
didn't have a cathedral at hand that would give the chorus that impressive volume, they would still
enjoy the performance in a normal room.
The ability to micromanage sounds, in the way we can do it now in the studio, feels like a
qualitative step beyond what we used to be able to do in the past. Like going from some salt and the
herbs that grow in your backyard for spicing your food, to glutamate and a whole catalog of
E-something chemicals used for industrialized food production.
Yes, those tools that we have today are great, and it's even greater that they're now available to
anyone with a computer. But be careful and conscious about how you apply them.
At least I, when I make a pizza myself at home, don't want it to taste like a frozen pizza out of
some factory.

@@ -0,0 +1,62 @@
Title: Development update (January 10)
Date: 2019-01-10
Development had a small hiatus in the second half of 2018, but in
December - thanks to a lot of spare time - I picked coding up again.
The biggest and most visible change was that I completely rethought
the general idea behind noisicaä. In the beginning I started out with
something that should resemble a piece of paper with staves on it,
that lets you easily create, edit and play music, and I only
envisioned basic audio processing features. Over time that turned into
something much more like the typical DAW with tracks and plugins and
all that stuff. But I wasn't very happy about the way DAWs typically
manage the audio processing. In my mind manipulating the actual
processing graph feels like the most intuitive way, something like
[Pure Data](https://puredata.info/) or a modular synth. Which also
happens to be the way the engine works internally. I didn't quite know
how to make this properly accessible, so I just had something basic in
place.
At some point I came across a [video
demo](https://www.youtube.com/watch?v=pMXhnBANiMA) for
[BespokeSynth](https://github.com/awwbees/BespokeSynth) and I really
liked the general approach of that UI. Eventually I started thinking
more about what noisicaä should actually be. The primary use case has
always been musical composition, and that doesn't change. But most
apps have some kind of metaphor to the real world that they're built
upon (e.g. sheet music for [MuseScore](https://musescore.org/),
recording studio for [Ardour](https://ardour.org/), etc.). And I
simply decided that noisicaä's foundational idea should be the modular
synth.
So I did a major refactoring, putting the processing graph onto the
center stage, both in terms of internal code structure as well as in
the UI. There are still tracks, and they are prominently featured in
the UI, because time and temporal editing remain a central
ingredient. But e.g. now you add a 'Score Track' node to the
processing graph, which will also add an editable track, instead of
the other way around.
Another major change during the past weeks was that I finally got
around to purge all Python from the critical path in the audio
thread. This is now all C++ and hopefully realtime safe. Now playback
even at smaller block sizes is much more reliable.
Finally on the more organizational side, I adopted a scheme of doing
"sprints". Well... I know that there is some development methodology,
which uses the term "sprint", but I don't even know which one that is,
what it exactly means by that word, nor do I intend to adopt any
methodology or become serious about project management or anything
evil like that. But it seems to work quite well to structure that
gigantic pile of ideas, which I want to implement, into reasonably
sized chunks, and then spend a week or two ticking off all the boxes for
one of those chunks. And I just call those chunks "sprints".
And maybe, just maybe... if every other week or so, when I merge
all the changes for a completed sprint into the master branch, and
have that nice feeling of accomplishment... maybe that also makes me
want to write a post with a short update of what I did. Otherwise this
blogging thing simply doesn't seem to work for me, as you can tell
from the frequency of postings.

@@ -0,0 +1,28 @@
Title: Development update (January 14)
Date: 2019-01-14
I just finished a "MIDI in" sprint and [merged it into the master
branch](https://github.com/odahoda/noisicaa/commit/f4ce7f53d2f031967d07d48fbaf279e3aac5332b).
Oops, I just noticed that I forgot to remove the commit messages from the branch. I usually do a
`git merge --squash [branch]`, so there's just one big commit to the master branch, instead of
dozens of small ones, and also write a commit message summarizing all the changes. git helpfully
prepopulates the commit message with all the individual commit messages from the branch, which I
then just remove. Except for today...
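For reference, the workflow I mean is plain git, nothing noisicaä-specific (the branch name here is
just an example):

```bash
git checkout master
git merge --squash midi-in   # stages the branch's changes, no commit yet
git commit                   # editor opens, prepopulated with all the
                             # individual commit messages from the branch
```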
### What's new
There is now a "MIDI source" node, which can be used to feed MIDI events from any ALSA device, e.g. you USB keyboard, into the graph.
[[img:2019-01-14-development-update.png]]
There used to be some similar functionality written in Python, but that lived on the UI side and
MIDI events were fed to the backend via RPCs. And that implementation gradually fell apart, as I was
refactoring things over the past month. So the only place where it was used (the onscreen keyboard
in the instrument library, which could be connected to a real MIDI keyboard) didn't even work
anymore.
The new implementation is pure C++ and MIDI events are collected directly in the audio
thread. Of course except for that onscreen keyboard, which still has to send its MIDI events via
RPC to the engine. But that's now done using the same mechanism for communicating with node
processors, which is also used for other things.

@@ -0,0 +1,26 @@
Title: Modular DAWs
Date: 2019-01-25
I recently came across those two announcements:
* [Loomer Architect: A modular MIDI
toolkit](https://www.kvraudio.com/forum/viewtopic.php?f=141&t=517104) (via
[LinuxMusicians](https://linuxmusicians.com/))
* [Bitwig Studio is about to deliver on a fully modular core in a
DAW](http://cdm.link/2019/01/bitwig-studio-3-modular/) (via
[planet.linuxaudio.org](http://planet.linuxaudio.org/))
I only read those articles and looked at the pretty pictures, but haven't tried out any of
those. But judging from that superficial glance, both these projects seem to move into the same
direction as noisicaä. The Bitwig article also lists some more software with a similar approach. So
it appears that my idea for a modular DAW isn't that unique, possibly rather old, and perhaps some
recent trend thing (the last two aren't necessarily a contradiction).
I'm trying not to pay too much attention to other audio software, which includes *not* trying that
stuff out myself. This is partly to keep myself blissfully ignorant of the fact that what I'm doing has
already been done before (and probably better), i.e. that it's a waste of time. But also to keep a fresh
perspective and possibly create something new, instead of just cloning what already exists - assuming
that I actually have any new ideas, which is doubtful. And finally, if I got too much inspiration
from other projects, I would just end up with an unmanageable pile of ideas of all those great
things I should add to noisicaä, and would never get anything done. But I do keep a list of bookmarks of
stuff that I find along the way, including related or similar software projects.

@@ -0,0 +1,42 @@
Title: Development update (January 27)
Date: 2019-01-27
Just [merged the
branch](https://github.com/odahoda/noisicaa/commit/79b35d9689eccd260c854d32f1ebcc9ec2473603) for a
"command cleanup" sprint. This was mostly an internal cleanup with just a few user visible changes.
### What's new
Now some sequences of commands are merged into a single command.
Every time you make a change to the project, a command is issued for it and (in broad terms) this
command is appended to the "undo list". Turning a dial could produce dozens if not hundreds of
separate commands, and an 'undo' would undo each of those one at a time.
Now new commands might get merged into the previous command, so a sequence of many control value
changes is collapsed into a single change. And 'undo' will undo all those changes in one step.
It is also possible again to set the pitch for the note events produced by a beat track. That got
lost in some previous refactoring.
### Internal changes
The major part of the sprint was to clean up the commands, which are internally used by the UI to
modify the opened project. Initially I used very fine-grained commands, e.g. a dedicated command
`SetClef` to change the clef of a measure. The whole thing grew quite organically and I never
developed a consistent scheme.
Now I changed that to a basic [CRUD
scheme](https://en.wikipedia.org/wiki/Create,_read,_update_and_delete), i.e. most commands are
called `CreateFoo`, `UpdateFoo`, `DeleteFoo` (ok, it's really `CUD`, but whatever). And instead of
separate commands to change the various properties of an object (like the `SetClef` mentioned
above), there is now just a single `UpdateFoo` command, and which property should be changed is
determined by the presence of specific fields in that command (which are just [protocol buffer
messages](https://developers.google.com/protocol-buffers/)).
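A hypothetical sketch of how such an `UpdateFoo` command might be applied (class, field and helper
names are made up, not the real noisicaä ones):

```python
# Presence of a field in the command proto determines which property changes.
# HasField() is the standard protobuf presence check.
def apply_update_track(project, cmd):
    track = project.get_object_by_id(cmd.track_id)
    if cmd.HasField('name'):
        track.name = cmd.name
    if cmd.HasField('clef'):
        track.clef = cmd.clef
```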
While doing those changes I also got tired of typing names like `PipelineGraphNode`. Given the
central role, which the graph and the nodes therein should now play, I renamed those to the much
more compact `Node`, `NodeConnection`, etc.
And some minor cleanups: simplify the API of the `Command` class, add factory functions for all
command protos (for some more type safety).

@@ -0,0 +1,29 @@
Title: Development update (March 3)
Date: 2019-03-03
Just [merged the
branch](https://github.com/odahoda/noisicaa/commit/da16c0626ab6a5b327a4927528c95718d3c1b3bb) for the
"Proto IPC" sprint. This was again mostly an internal cleanup with no significant user visible
changes.
### Internal changes
The internal communication between processes now uses (almost exclusively) protocol buffers instead
of pickled Python objects. The IPC code has also been optimized to make it a bit faster.
### WTF am I doing here?
So I started out with the ambition to make music. Instead of, you know, picking up some existing
software and just going ahead, I decided to write my own. As if that would be more efficient in any way
(but it's fun and enjoyable for me).
And now, instead of, you know, adding musical features to it, I spend my time working on completely
unrelated infrastructure stuff, for which there is probably some existing library that does a better
job than me (but it's fun and enjoyable for me).
I guess you can call that "productive procrastination", though [existing
usages](https://medium.com/@protoio/why-productive-procrastination-can-be-beneficial-6540e95459f9)
of that term seem to see it in a more favorable light than I do. So I would like to get more
focused on what I really want and at least work more on the musical features of noisicaä. The
ambition to eventually make music with it shouldn't totally disappear. But I won't be able to
completely suppress my urge to give the code base some polish from time to time...

@@ -0,0 +1,33 @@
Title: Development update (March 24)
Date: 2019-03-24
I got a little bit distracted by reading too much about Emacs and tweaking my config for it, so
noisicaä had a little downtime of about three weeks. But to get things rolling again, I had a little
"Expose Ports" sprint, which I just
[merged](https://github.com/odahoda/noisicaa/commit/6d52afb424e277c9884228d99fb88d98ddce5237) into
master.
### What's new
Control ports, which previously could only be set interactively to a fixed value, can now be
"exposed" as normal ports, so you can connect them to other nodes. Basically what other DAWs call
"automation"...
The UI is still suboptimal - it's just a nameless checkbox next to the dial. This also only works
for the generic node UI so far. Other built-in nodes with a custom UI should also get that feature,
but I'll have to think about the UI design. Perhaps make this available via a popup menu on all
controls?
Also, a-rate control ports can now be used like k-rate control ports, i.e. either set the value via
a UI dial or expose and connect them to other nodes.
### Bug fixes
* Fixed a crash when removing a node with connections.
* Fixed a crash when removing some LV2 plugin nodes, where there is some issue with the [State
extension](http://lv2plug.in/ns/ext/state/). The issue still exists (a traceback is dumped to the
log and state collection stops), but doesn't cause a crash anymore.
### Internal changes
* Dump audio engine opcodes to the log (via the "Dev" menu).

@@ -0,0 +1,28 @@
Title: Development update (April 6)
Date: 2019-04-06
This [commit](https://github.com/odahoda/noisicaa/commit/c3e5c3a88dcf7cfac5f13cd67601be0b8c2fe3a5)
adds the capability to have nodes with dynamic ports, i.e. the list and types of ports are not
static anymore, but can change during the lifetime of the node. For now this is only used by one
builtin node type, but I plan to make more use of that feature in the future. E.g. to have a single
"VCA" node with a flexible number of channels, instead of needing a number of separate "VCA (mono)",
"VCA (stereo)" nodes.
### What's new
The Custom Csound node now has a dynamic port list. Ports can be added, removed and modified.
### Internal changes
* The NodeDescription of a node can be generated dynamically. Changes are propagated to the audio
engine and for some changes, e.g. port type or direction changes, the underlying processor
instance gets reinitialized (which does cause a brief audio glitch, if the processor is hooked up
and producing sound).
* Added node parameters, which are persistent in the audio engine graph, but not part of the
project. Used for setting the Csound orchestra and score of the Custom Csound node. Processors
already had the concept of parameters, which has been assimilated into the more generic node
parameters.
* A new ObjectListEditor class, which provides a table widget for editing lists of
objects. Currently only used for the port list editor of the Custom CSound node.

@@ -0,0 +1,62 @@
Title: Development update (April 25)
Date: 2019-04-25
The latest
[commit](https://github.com/odahoda/noisicaa/commit/5ca602b9e41e0cbfaa38f528a6a756e3aab7d803)
primarily adds a handful of new builtin nodes.
This "MIDI CC to CV" node actually poses an interesting problem. It is inherent to MIDI controllers
that only changes of the controller positions are communicated, which implies that on startup you
don't know what the current position of the controller is until the first change event comes in.
You could use the control values coming from the MIDI CC to CV node to control some sounds and tweak
the controllers to get the sound that you want, but when you then close the project and open it
again on the next day, the control values are reset back to zero. If you're lucky, then the MIDI
controller is still plugged in and the knobs are still in yesterday's position, so you could wiggle
every knob just a little bit to get the system back to almost the same state as it was.
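The conversion itself is simple enough; a minimal sketch (names made up; 0xB0 is the MIDI "control
change" status byte) shows why the node can only ever know the last value it saw:

```python
# Minimal MIDI CC -> CV sketch with last-value hold.
class MidiCCtoCV:
    def __init__(self, cc_number):
        self.cc_number = cc_number
        self.value = 0.0  # unknown until the first CC event arrives

    def handle_event(self, status, controller, value):
        if status & 0xF0 == 0xB0 and controller == self.cc_number:
            self.value = value / 127.0  # map MIDI range 0..127 to 0.0..1.0
```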
noisicaä should arguably store the most recent value of each controller, but how exactly should that
work? If every value change is applied as a project mutation, then that implies that these can be
undone, which is kinda weird (how should the software undo a physical movement of some piece of
hardware?), but ok. But this node doesn't have to be connected to some actual hardware input. It
could just as well get its input from some node that generates CC events, e.g. a recorded MIDI
stream or some generative music thing. Then playing back the project would modify the project and
that really doesn't make sense.
My best answer to this problem is that this node simply should not be used to connect hardware
controllers to a project. It should only be used to convert recorded or generated MIDI CCs into
control values, and the current behavior will stay, i.e. the current value is transient and not
persisted. And at some future time it should be possible to link control dials to a hardware
controller - not via a port connection, but rather as an additional UI option. Those value changes
would then be recorded as project mutations, so they become persistent, i.e. the hardware controller
would just be used instead of the mouse, but the effect would be the same.
Another thing that became apparent during this sprint is that the amount of boilerplate code to add
new nodes is still way too big. I could have easily gone on and added more nodes - especially ones
that can be fully implemented in Faust, i.e. pure DSPs that don't interact with MIDI events or need
more than the most basic UI, are pretty easy to do now. But the urge to do yet another huge
refactoring kept growing. So now I'm at a point where I say to myself: "Look, I added some new
features, can I now [do something fun again](03-03-development-update)?" And I can't really say "no"
to myself.
### What's new
The following new node types have been added:
* Oscillator: a simple oscillator with sine, sawtooth and square waveforms.
* Noise: white/pink noise generator.
* VCA, with optional smoothing of the control signal.
* Step Sequencer, with up to 128 steps and 100 channels, where each channel can produce either a
control value, a gate signal or a short trigger signal.
* MIDI CC to CV, converting MIDI controller change events into a control signal.
The insert node dialog gained some icons and displays information about the nodes, to give you some
more hints about what you're inserting beyond the name itself.
### Internal changes
* A new ProcessorFaust class for nodes based on [faust](https://faust.grame.fr/index.html). For now
only supports statically compiled code, i.e. users cannot (yet) edit and run arbitrary faust code,
like it is possible with Csound.
* Ports can be specified with a list of "label -> value" mappings, and the UI turns that into a
combobox instead of a dial.

@@ -0,0 +1,239 @@
Title: The great "Model Merge" refactoring
Date: 2019-04-28
As I mentioned in the [previous post](04-25-development-update) I plan to do yet another great
refactoring. But before I jump head first into this adventure, rip everything apart and rewrite
something like 60% of the code base, I want to take a step back and describe my plan here. Looking
at the [AWStats](http://www.awstats.org/) for this website, I'm well aware that I'm just talking to
a void, but the point is really for myself to reflect on this, think it through one more time and
make sure I'm not overlooking something that would turn this plan into a nightmare after I've
already spent a lot of time on it.
But before I start explaining my plan, I should first explain the current architecture and what the
problems and benefits of it are.
### The current architecture
The three main components of noisicaä are the UI, the project and the audio engine. The "project" is
the part that owns the internal data structures that describe the project (the "model") and is
responsible for all changes to it (i.e. the "business logic") as well as persisting those changes to
disk. Those three components each run in a different UNIX process and communicate with each other
via a homegrown IPC system. Above those three processes is a main process, which is the one the user
creates when launching noisicaä, and it only does a little more than starting the other processes as
needed. There are also some other processes, and there is actually one project process for each
opened project, but that doesn't matter in this context.
The "model" is quite similar to e.g. the [HTML
DOM](https://en.wikipedia.org/wiki/Document_Object_Model) - a tree structure of objects that each
have a certain set of attributes. The authoritative model lives in the project process, but the UI
has a read-only copy, which it uses to build the interface for the project. For every user action
(that modifies the model) the UI sends a command to the project process, which executes it by
performing some sequence of changes to the model. Those changes are recorded as a list of
"mutations", which are persisted on disk and sent to the UI process, where they are applied to the
read-only model, which in turn triggers the necessary interface changes. Changes to the model in the
project process can also trigger updates to the state of the audio engine, but that part of the
system is out of scope of this discussion.
That setup seems to be quite convoluted and over-engineered, and I cannot really disagree. But I did
have reasons, when I designed this architecture:
#### Failure domains
Software has bugs, applications will crash, but data loss or corruption is unacceptable.
Specifically the UI feels problematic, partly because I haven't really figured out how to write good
unittests for UIs, so my level of trust in my own code isn't that high. Also the UI involves a lot
of native code, i.e. the Qt framework, so a bug often results in a segfault and not in a Python
exception, which I could handle more gracefully.
The code of the project process on the other hand is much simpler, written in pure Python and a lot
easier to test.
By putting the UI and the model in separate processes, a crash of the former cannot corrupt the latter.
#### Clean interfaces
In order to support undo/redo any change to the model must be wrapped in a command. Those commands
can then be reverted or replayed as needed. That pattern seems to be fairly common, e.g. Qt also
[offers it](https://doc.qt.io/qt-5/qundo.html), though I'm not using that implementation for
noisicaä.
Any change of the model outside of a command would lead to corruption, but only in combination with an
undo or redo, so this is hard to detect. And I know I'm lazy, so for rapid prototyping it can be
very tempting to directly change the model from the UI and then "clean it up later" - but actually
forgetting small bits here and there, which would be a source of subtle bugs. This architecture
forces me to go through that "command" interface, because the UI simply has no other way to change
the model.
### Problems of the current architecture
This architecture requires two views of the model, one in the project process and one in the UI
process, which are synchronized by sending mutations from the project process to the UI process over
IPCs. The model in the UI is limited to read-only access, whereas the model in the project process
has all the methods to implement the business logic. This results in a quite convoluted "three
dimensional" class hierarchy, i.e. every class exists three times: one abstract version, which
describes the properties of the class, one read-only implementation and one mutable
implementation. Combined with the "normal" class hierarchy (e.g. a `Track` is a subclass of `Node`,
which is a subclass of the fundamental `ObjectBase`), this results in a complex inheritance mesh,
which is pretty hard to reason about. For example there was a recent bug, which caused me a lot of
head scratching, until I noticed that I forgot one link in the inheritance mesh, but the observed
bug provided no indication that this could have been the cause.
That setup also requires a lot of boilerplate code and the code is spread across many places, which
makes it really annoying to add new classes. This point is what really caused me to question the
current architecture and look for alternatives.
### The plan
My plan is to not create a new process to host the business logic but rather have a single model
running in the UI process (for each opened project).
But I'm paranoid about data loss or corruption, so I want to spawn a separate process per opened
project, which is responsible for writing the mutations to disk. So if the UI crashes, that writer
can flush out all pending data and then shut down gracefully.
Instead of sending a command from the UI to the project process, the command could be directly
applied to the model in the same process. This will change the order in which updates will
happen. In the old architecture, the changes are applied completely in the project process,
before they are sent back to the UI, which applies them to its copy of the model and triggers all
required UI updates. If there is a bug in the UI, which causes it to crash while it processes the
mutation, then the command has already been completely applied and persisted in the project
process. Now the UI updates will be triggered while the command modifies the model, and if that
triggers a crash, then the command execution does not complete and nothing gets persisted. That's
probably better, as it prevents the model from getting into a state that would cause the UI to enter a
crash loop.
Having just a single model would remove the need for the complex class hierarchy. I could merge
those three incarnations of each class into a single class, so semantically related code moves
closer together and a lot of duplication and boilerplate should disappear. But that means that the
UI has at least technically the ability to directly change the model outside of a command execution,
sidestepping the process that is required for a working undo system. But there is already a system
in place to monitor all changes to the model (which is used to record the mutations during the
executing of a command), so I could just use that to trigger an assertion whenever such a forbidden
model change happens.
Once that first refactoring is done, I would also want to drop the concept of explicit `Command`
classes. All I need is the sequence of mutations that happen as a result of some user action. These
mutations are what is persisted to disk and they can be applied to the project in backward (for
"undo") of forward (for "redo") direction - that's how it already works today. For that it is
sufficient to mark a section of code as "this is a command" and there's not need to actually create
some `Command` instance and pass that around.
In pseudo code the current flow looks like this:
```python
# In the UI, triggered by some widget change:
send_command('UpdateTrack', track_id=track.id, name=widget.text())

# In the project process:
class UpdateTrack(Command):
    def run(self):
        track = self.get_object_by_id(self.track_id)
        track.name = self.name
```
where `send_command()` serializes the command and sends it over IPC to the project process, which
creates the `UpdateTrack` instance and calls its `run()` method while tracking all mutations to the
model ("set the 'name' property of the object '1234' to 'New name'"), and then persists those
mutations to disk and sends them back to the UI.
Note that `send_command()` is a coroutine, so all those steps that it triggers happen asynchronously
without blocking the UI. Which has so far not caused problems, but in theory a fast user might be
able to trigger further actions, before the results from the previous command have been applied to
the UI, which could cause all kinds of weird behavior.
With the new architecture the same would look like this:
```python
with project_mutations():
    track.name = widget.text()
```
Here the `project_mutations()` context manager would do the collection of mutations for persistence,
whereas there wouldn't be any additional action needed for "sending anything back to the UI",
because the changes are directly applied to the model that the UI already monitors.
And it should be possible to make everything within that context manager synchronous, so the user
cannot actually interact with the UI while those changes are applied (which should usually take no
noticeable time).
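A self-contained sketch of that context manager idea (illustrative names; the real code would hand
the collected mutations to the writer process):

```python
import contextlib

class Project:
    def __init__(self):
        self.log = []            # the mutation log that gets persisted
        self._recording = False

    def record_mutation(self, mutation):
        # every model change ends up here; changes outside of a command
        # trip the assertion mentioned above
        assert self._recording, "model changed outside of a command"
        self.log.append(mutation)

    @contextlib.contextmanager
    def project_mutations(self):
        self._recording = True
        try:
            yield
        finally:
            self._recording = False
            # here the collected mutations would be flushed to the writer process
```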
### What I lose
#### Static API enforcement
In the old architecture the contract that the UI only has a read-only view of the model can be
statically checked with [mypy](http://mypy-lang.org/), because the classes that the UI sees only
have read-only properties.
This would change, because now the UI sees the one implementation of the model, with all the properties
and methods, which are also needed for changing it. The UI is still not allowed to use those
(outside of the explicit execution of a command), but violations of that contract can only be caught
at runtime.
#### Graceful UI crashes
Given the old architecture, the UI process could just restart after a crash and reconnect to the
(unaffected) project process. This could even go as far as not disrupting the audio playback at all,
i.e. the audio would just continue playing, if the UI goes down, and when the UI comes back up,
everything is as if nothing happened (of course the UI might enter a crash loop, if the cause of the
crash is deterministic). Note that this is not implemented, but it could well be done with the old
architecture.
With the new architecture, the project state would be lost along with the UI, if the UI crashes, so
the project would have to be reopened as well. In theory the audio engine would still be unaffected
by such a restart, but reconnecting to it and syncing state would be much harder, if feasible at
all, so all audio engine state related to the project would have to be torn down and recreated when
the project is reopened.
So with a pessimistic outlook, where I assume that the software is unreliable and often crashing,
the old architecture would provide a better user experience. I guess I have to be more optimistic
and just make the code more reliable.
#### Remote UI
Another idea, which is also not implemented, is to allow the UI to connect over the network to a
project running on a different machine. E.g. I could imagine multiple people working on the same
piece of noise simultaneously, each one with their own workstation/laptop. That seems like it would
be fairly straightforward with the old architecture. 90% of that could be achieved by using TCP
sockets instead of UNIX domain sockets for the IPCs between the processes. But the remaining 10%
would still be tricky. Writing distributed systems is a completely different beast, because you
have to be resilient against delays or network failures for every IPC. And for all kinds of file
access, you can't assume anymore that all processes see the same local filesystem.
So while this is a kind of nice idea, I'm not too sure if it's worth pursuing at all.
### What I win
#### Less code
The current architecture requires a lot of boilerplate code and the new architecture should remove a
lot of that. Less code is good.
The amount of code to implement the model should be reduced significantly, because I get rid of that
class triplication that I described above.
Also many commands are actually just changing a single property, and those could be implemented much
more efficiently. A common pattern is to have one UI widget which is connected to a property of one
object. Right now setting this connection up and synchronizing changes in both directions requires a
lot of fairly dumb code - very repetitive, but different enough to make it hard to have a generic
function taking care of it. The new architecture should help here, too.
This code reduction is really my main motivation for this refactoring, and I would be very
disappointed, if the final diff, when I merge it into master, does not have a net negative line
count.
#### Performance
The communication between the UI and the project will not go through the IPC layer, so noisicaä
might become a bit snappier. Though I don't really see a performance issue as it is today, so this
is more of a theoretical improvement.
### Verdict
I don't foresee any issues, which could be showstoppers for this refactoring, and I do think the
gains are worth the losses.
I do have to admit that this write-up didn't grant me any new insights. I already had a bunch of
unordered notes and my hope was that turning those into this more structured document would cause me
to see things, which I have missed before. But I'm either still missing something important or there
wasn't anything missing in the first place.

@@ -0,0 +1,32 @@
Title: Development update (May 19)
Date: 2019-05-19
The great "Model Merge" refactoring, which I described in the [previous post](04-28-model-merge) has
been
[committed](https://github.com/odahoda/noisicaa/commit/accff25ed0495d14a793f5d5acbec6158b1a1afa).
Overall it was mostly painless and worked the way I planned. In the beginning it felt a bit like a
massacre as I was tearing down a lot of code, and the whole thing was in some messy intermediate
state, but quickly new patterns emerged and the conversion became mostly
mechanical. [mypy](http://mypy-lang.org/) and [pylint](https://www.pylint.org/) were very helpful
tools during this adventure. I basically went from one file to the next, fixing all the issues that
they complained about and at the end, when all pieces fell into place and I had something executable
again, there were just very few bugs left - I think just one issue, which was detected by a unittest
and one or two issues I found through manual testing. The biggest advantage of those tools over
unittests alone is that they produce meaningful reports for a single file even if the code at large
isn't in a usable state. I had to refactor a lot of code in different places before I could even get
any passing unittests and without mypy/pylint I would have been flying blind for that time.
After I executed my plan, I took another stab at the idea of auto-generating some of the
boilerplate code for model classes. I tried it once before, but abandoned it again, because it
turned out to be too complicated. But with the new, much simpler code structure after the
refactoring, it became much easier to do. That got rid of another batch of "boring" code and adding
new classes (e.g. for more builtin node types) should now be much less cumbersome than it was
before.
According to [SLOCCount](https://dwheeler.com/sloccount/) the total code base shrank by
approximately 3000 source lines of code or 6% (or almost 8% when just looking at the Python code),
which is quite considerable.
With this project out of the way (and out of my head), I should do some "shiny new features" sprint
next.

@@ -0,0 +1,71 @@
Title: Development update (June 13)
Date: 2019-06-13
Time for the [next
commit](https://github.com/odahoda/noisicaa/commit/8d7c0cc0d44d270e8ba08a6937d2e921f4c6d758). This
was mostly about streamlining the setup process for the application, but I also added some more new
stuff, which felt related. I could have added a lot more features around managing projects, but I
postponed those to another time.
### What's new
#### More responsive startup
When starting the application, the window now opens much earlier, showing a progress bar while the
application is initializing. I decided against using the typical "splash window", because by
directly opening the main application window, I can already show the "open project" dialog and let
the user choose which project to open, before the application is fully initialized. This probably
cuts a few seconds from starting the application to the project being opened. Thanks to Python's
[asyncio](https://docs.python.org/3/library/asyncio.html) package, which I've been using anyway,
this didn't require any significant changes, nor did I have to bother with background threads and
how to communicate with the UI, etc. Coroutines FTW!
#### New "open project" dialog
This "open project" dialog is now a custom widget in the main window (on any new project tab), and
not the standard file dialog. In terms of features and usability it probably needs some more
polishing, but it should be good enough for now.
#### Project debugger
And then there's an initial version of a "project debugger". It can only display the list of
mutations, and you can "truncate" the list, i.e. remove the latest mutations. This can be useful, if
some action got the project into some broken state, so it fails to open. Then you can try to repair
the project by removing the offending changes. Hopefully this won't be needed that often by future
end-users, but right now during development, this happens quite frequently. So far I have just
discarded whatever project I had opened, but at some point I should really start trying to make
something proper with noisicaä. Something I wouldn't just toss away because of some silly bug...
#### "New project" dialog
There's a silly gimmick in the dialog for entering the name of a new project. The project name is
pre-filled by a "random song title generator". It comes up with some funny names and who knows,
perhaps inspires some awesome noises. The rules for those names come from the [Tome of Forbidden
Spells](https://github.com/Sundin/Tome-of-Forbidden-Spells), which just happened to be the most
promising candidate when I searched for ["song title generator" on
GitHub](https://github.com/search?q=song+title+generator&type=Repositories). The titles are
Metal-style, which might not be everyone's cup of tea, nor the most appropriate for noisicaä, which
is more suited for electronic music (at some future point), but I kinda like the results. "Vomit of
the Song" &#x1f918;
### Bug fixes
* I fixed a crash when you deleted a node, closed the project and then reopened it.
* When the project fails to open for whatever reason, a dialog just tells you about it and you get
back to the "open project" dialog - instead of completely crashing noisicaä. Again something that
hopefully doesn't happen often for end-users, but is quite frequent during development.
### Internal changes
I slightly changed the file structure for projects. Now each project has a single directory, and all
files are stored in that directory.
And the only significant change I had to make to get the startup process working smoothly was to
reorganize the instrument library. At least on my machine it contains a few thousand entries, and
initializing that list blocked the main thread for a second or so. I reorganized that code such that
the list is now built in smaller chunks. And as a side effect of that reorganization, that work in
only done once instead of every time an instrument library dialog is opened. In QT speak: there's
now a single InstrumentList object, which implements the
[QAbstractItemModel](https://doc.qt.io/qt-5/qabstractitemmodel.html) API, and all
[QTreeView](https://doc.qt.io/qt-5/qtreeview.html) widgets share that model.
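In PyQt5 terms, the shared model looks roughly like this (a sketch using QAbstractListModel for
brevity; the real InstrumentList implements the full QAbstractItemModel tree API):

```python
from PyQt5 import QtCore

class InstrumentList(QtCore.QAbstractListModel):
    """One shared model, filled in small chunks to keep the UI responsive."""

    def __init__(self):
        super().__init__()
        self._items = []

    def rowCount(self, parent=QtCore.QModelIndex()):
        return len(self._items)

    def data(self, index, role=QtCore.Qt.DisplayRole):
        if index.isValid() and role == QtCore.Qt.DisplayRole:
            return self._items[index.row()]
        return None

    def add_chunk(self, names):
        # called repeatedly, e.g. from a timer, instead of blocking the UI once
        first = len(self._items)
        self.beginInsertRows(QtCore.QModelIndex(), first, first + len(names) - 1)
        self._items.extend(names)
        self.endInsertRows()
```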

@@ -0,0 +1,35 @@
Title: Development update (June 23)
Date: 2019-06-23
The [latest
commit](https://github.com/odahoda/noisicaa/commit/bc9be73125c9342044d4ed0bc50c75c886f126ca) just
adds a single node type.
### What's new
#### MIDI Looper node
This node can record a few beats of MIDI events and play them back in a time-synced loop.
For now only the most basic features are implemented. I plan to do another pass on this node at some
later time to extend the features (e.g. multiple "patches", editing MIDI events in the pianoroll
widget, etc.).
It does feature a new pianoroll widget, which is just read-only for now and displays the recorded
MIDI events. After adding editing features to this widget, I might be able to reuse it for a
pianoroll track, which I'm planning to add anyway.
And it is the first time that noisicaä can record something in
realtime. Consider this a first prototype of future recording capabilities.
### Bug fixes
* There were `.pyi` files missing for two Cython modules. After adding those, `mypy` uncovered some
minor bugs.
### Internal changes
* Initial infrastructure showing the node UIs in a dialog window. That needs some more
work. Eventually it should be possible to pop out the UI to a window for every node.
* Some infrastructure to persist Qt properties in the project's session data.

@@ -0,0 +1,93 @@
Title: Development update (July 6)
Date: 2019-07-06
Today's [commit](https://github.com/odahoda/noisicaa/commit/a00e72ed49650be1859b1ac910f150976d214713) has a number of new node types. And pictures!
### What's new
#### Metronome
[[thumb:2019-07-06-metronome.png]] The simplest imaginable metronome. It just makes a tick on every
beat (if playback is running). But you can select the sample that it should play. The audio is not
sent directly to your speakers, you have to wire it up just like any other node.
#### MIDI Monitor
[[thumb:2019-07-06-midi-monitor.png]] The MIDI Monitor just displays a list of the MIDI events that
it receives on its input port. Inspired by [KMidimon](http://kmidimon.sourceforge.net/), but much
simpler. I mostly wanted it for debugging of other MIDI nodes.
#### MIDI Velocity Mapper
[[thumb:2019-07-06-midi-velocity-mapper.png]] The MIDI Velocity Mapper takes a stream of MIDI
events, tweaks the velocity of `noteon` events (other MIDI events are left untouched) and then just
passes them through to its output port.
The main motivation is the little [AKAI APC Key 25](https://www.akaipro.com/apc-key-25) MIDI
controller, which I have on my desk and sometimes use to feed MIDI data into noisicaä (or any other
audio software). It's nice, because it's small and always in arms reach, if I want to doodle
around. But the keys are IMHO not very good. They feel pretty cheap and I have to hit them very
hard to get even a medium velocity out of them. Perhaps that's ok for professional musicians, or
when you're on stage and pumped up with adrenaline. But for a bloody amateur like me, who has
trouble even hitting the right keys, that makes it very hard to use. Putting more energy into my
fingers just reduces the precision and I keep hitting multiple keys at the same time. The Velocity
Mapper basically allows me to adjust the sensitivity of the keyboard.
#### Control Value Mapper
[[thumb:2019-07-06-cv-mapper.png]] The transfer function that I implemented for the MIDI Velocity
Mapper was written in a way that is easy to reuse elsewhere. So I just added another node,
which applies such a function to an a-rate control value.
The function just maps some input value to an output value. It currently supports three different
functions:
"Fixed value:" As the name suggests, the output is always a fixed value. Not very useful for control
values, but it makes more sense for the MIDI Velocity Mapper, where you might want to just ignore
the incoming velocities.
"Linear:" Useful for mapping e.g. a [-1,1] signal to [0,1].
"Gamma:" This uses the same function as the "Gamma Correction" known in image processing (or your
monitor settings). Basically brightens or darkens a signal, while still maintaining the full value
range.
#### Oscilloscope
[[thumb:2019-07-06-oscilloscope.png]] This should mostly work like a real oscilloscope. I
think. Because I never used one.
I felt the need to add this one, because while testing out the Control Value Mapper something "did
not sound right". But without being able to visualize the signals, I couldn't tell what the problem
was. It turned out that Csound's [lfo](http://www.csounds.com/manual/html/lfo.html) opcode does not
work as I expected. The manual makes it look like it can produce both k- and a-rate signals. But it
turns out that even when using it with a-rate output, the actual value is only computed at
k-rate. So what I heard was basically an LFO with a super low sample rate.
Currently you can only feed a-rate control values into it.
### Internal changes
* Created a base class for testing processors, removing a bunch of code that I had duplicated over and
over.
### What's next?
While working on those new nodes, it became quite annoying that ports have different types that you
can't easily connect. E.g. plugins generally use k-rate control values for their controls, whereas
my builtin nodes prefer a-rate control values. So if I have an LFO producing an a-rate output, I
can't use it directly to feed some control of a plugin. Or the Oscilloscope should also be able to
display audio data. And so on.
In some cases it can make sense to allow connections between ports with different types. A k-rate
control output could just be interpolated to feed into an a-rate control input. An audio output can
feed directly into an a-rate control port (not the other way around, because control values can be
in any range, and you really don't want to feed that into your amp). So noisicaä could do some
auto-conversion as needed.
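Upsampling a k-rate signal to a-rate could be as simple as a linear ramp per block (a sketch,
assuming one k-rate value per processing block):

```python
import numpy as np

def k_to_a(prev_value, value, block_size):
    # ramp from the previous block's value to the current one,
    # producing one sample per audio frame
    return np.linspace(prev_value, value, num=block_size, endpoint=False)
```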
Or nodes could be able to accept different connections to their ports. E.g. the Oscilloscope would
declare that its input port can accept k-rate and a-rate control values as well as audio
signals. And then just process that input appropriately. But that capability would be limited to
builtin nodes, because plugins don't know how to do that.
So that's probably what I'm going to work on next.
@ -0,0 +1,39 @@
Title: Development update (July 14)
Date: 2019-07-14
This [commit](https://github.com/odahoda/noisicaa/commit/66011fa94a8e5eca2a5ce38f3d47a211a8f7ac66)
implements most of what I was talking about [last week](/blog/2019/07-06-development-update).
### What's new
#### Variable type ports
Ports can now be declared with multiple types. Two ports can be connected, if they have at least one
type in common.
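
Conceptually the connection rule is just a set intersection; a sketch (not noisicaä's actual
declaration syntax):

```python
from enum import Flag, auto

class PortType(Flag):
    AUDIO = auto()
    KRATE_CONTROL = auto()
    ARATE_CONTROL = auto()

def can_connect(output_types: PortType, input_types: PortType) -> bool:
    # Connectable if the declared type sets overlap; the engine then
    # picks one of the common types for the actual buffer.
    return bool(output_types & input_types)

# E.g. an output that speaks audio and a-rate control can feed an a-rate input:
can_connect(PortType.AUDIO | PortType.ARATE_CONTROL, PortType.ARATE_CONTROL)  # True
```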
This feature is currently used for
* the Oscilloscope node, so it can also graph audio and k-rate control signals.
* the Oscillator node, so its input and output ports can be connected to any audio or a-rate control
signal.
More nodes should make use of that feature in the future. Ideally this should remove the pain of
constantly thinking about k-rate vs. a-rate control signals, but this feature is limited to builtin
nodes. So I might still need some way of automatically converting between incompatible port types. I
still don't like the idea of an implicit conversion of a-rate control signals to k-rate, because
that's a lossy conversion. But I also expect that to be frequently needed, because plugins generally
use k-rate control values for their controls.
### Internal changes
* Processors now get references to `Buffer` instances, instead of raw data pointers. The processor
can query the type of the buffer to determine which of the declared types is actually being used.
* Tracking the connected buffers is now done completely in the `Processor` base class. Subclasses
just had a lot of trivial code to reimplement the same thing over and over again, but there was no
real need for that level of customization.
* Refactored tests for processors, removing a lot of redundant code. Because of this, this commit
even has a slightly negative line count!
* Connections now track their type. This change is compatibility breaking.
@ -0,0 +1,125 @@
Title: waf migration
Date: 2019-07-20
I could not resist and spent the last few days converting the build system from
[`cmake`](https://cmake.org/) to [`waf`](https://waf.io/).
Surprisingly the resulting
[commit](https://github.com/odahoda/noisicaa/commit/0e4c0ddf10143d768de71eb76ce81417db8a2411) has a
negative line count, even though the new system has some new features.
I was never really happy with `cmake`. I needed a proper build system, because noisicaä isn't a pure
`python` application (unlike most of the other things I do in my spare time). For the
initial prototype I got away with using `setup.py` to build the few `cython` extensions, but I
quickly grew out of that. My usual choice would have been good, old
[`make`](https://www.gnu.org/software/make/), but one of the common patterns of noisicaä development
is to upgrade my toolbox and try something more modern. `cmake` advertises itself as modern, fast
and flexible, and it's used by a lot of large open source projects. So I decided to give it a try.
And it's certainly easy to use for `c++` projects - everything needed for that is
built-in. But my case involved a mix of `python` and `cython`, as well as a bunch of custom build
rules to auto-generate various files, e.g. using `csound`, `faust`, etc. To define such custom
build rules, you have to dive deep into `cmake`'s language and that's the part that turned me off.
`cmake`'s language feels like a reanimated corpse that died a few decades ago. It's a macro
language, there are no proper functions, so "return values" are written to some kind of global
variables. While you can get everything done somehow, it feels really quirky.
Also `cmake`'s distinction between files and targets was very confusing to me. Only targets can have
dependencies on other targets - at least that's how I understood it. I could never really wrap my
head around it, so I have probably created a lot more targets than necessary.
`waf` is not without its own quirks. Part of that might be due to compatibility requirements. On
the one hand compatibility with older versions of `waf`, i.e. the usual accumulation of cruft that
you see in any non-trivial application. On the other hand compatibility with old `python`
versions. E.g. `waf`'s `Node` class looks a lot like
[`pathlib`](https://docs.python.org/3/library/pathlib.html), but it cannot use it as long as it
wants to stay compatible with `python` versions before 3.4. And because it wants to be both self
contained and small, it can't just use packages outside of the `stdlib`.
But the biggest selling point of `waf` is that it's `python`. Instead of some half baked custom
language, you have the full power of a proper general purpose language at your fingertips.
Converting the existing build logic over to `waf` was pretty painless. Basically just rename all
[`CMakeLists.txt`](https://github.com/odahoda/noisicaa/blob/371337a0/noisicaa/core/CMakeLists.txt)
files to [`wscript`](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/noisicaa/core/wscript) and
make some fairly mechanical syntax changes. Most of the work was reimplementing the various custom
rules for `waf`. That turned out to be a bit more
[verbose](https://github.com/odahoda/noisicaa/tree/0e4c0ddf/build_utils/waf) in terms of line count,
but that's more than compensated for by expressing the logic in a proper programming language.
The only significant difference should be that I never got dependencies of `cython` modules right in
`cmake`. Every once in a while I made a change that should have triggered a rebuild of some `cython`
modules, but that never happened - resulting in runtime errors. Getting that right in `waf` was very
straightforward.
Once I had the basic build logic carried over to `waf`, I started looking into things I could now do
better.
### Handling of 3rd party dependencies
One of the things that neither `cmake` nor `waf` (nor any other build system that I know of)
directly supports is the handling of dependencies on 3rd party packages. That's actually something
that could be done with `setup.py`, which is more of a package management system than a build
system - at least for other `python` packages (and, with a terrible hack, also for non-`python`
libraries).
Before you can start to build any software from source, you probably need a bunch of libraries,
packages, tools, etc. installed, which are used by those sources. Usually the `configure` step of
the build system checks for these dependencies and fails if anything is missing. It is then left to
the user, possibly directed by some `INSTALL` documentation, to install the missing pieces and try
again. That usually takes a few iterations until everything is in place. That's pretty tedious for
the user, and it's hard to keep the documentation in sync with the actual requirements of the
software.
I used to have a [`listdeps`](https://github.com/odahoda/noisicaa/blob/371337a0/listdeps) script,
which had all the knowledge about required packages, both `python` packages that are to be installed
with `pip`, as well as system packages to be installed with `apt`. This way the list of required
packages was encoded in a way that did not need human intelligence to decipher. I have fully
automated [tests](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/noisidev/runvmtests.py), which
perform the installation on a minimal system (running in a VM), thus ensuring that the list of
requirements is complete.
I now took this approach a step further and integrated it directly into `waf configure`. So `waf`
checks if all required packages are present and if not, simply installs them. `python` packages are
downloaded and installed in a virtual environment. System packages are installed with `apt`, which
might trigger a password prompt. Usually the user does not expect `configure` to actually modify the
system, so that behavior must be enabled by some flags.
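
Squinting a bit, the `configure` logic boils down to something like this sketch (the real logic
lives in `build_utils/waf/virtenv.py`; the flag name here is made up):

```python
# wscript (sketch)
import subprocess

def options(ctx):
    ctx.add_option('--install-system-packages', action='store_true', default=False,
                   help='allow configure to install missing system packages')

def configure(ctx):
    if not ctx.find_program('csound', mandatory=False):
        if not ctx.options.install_system_packages:
            ctx.fatal('csound is missing; rerun with --install-system-packages')
        # Modifying the system must have been explicitly requested by the user.
        subprocess.check_call(['sudo', 'apt-get', 'install', '-y', 'csound'])
        ctx.find_program('csound')  # now this must succeed
```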
There were a few dependencies (e.g. `csound`, `protoc`, ...), which are not available as easily
installable packages - usually because I want a newer version than what is available in Ubuntu. For
those packages I used that terrible hack I mentioned above. For each such dependency I had a
[`setup.py`](https://github.com/odahoda/noisicaa/blob/371337a0/3rdparty/csound/setup.py) file, which
I could install with `pip`. But instead of having any real sources of its own, those `setup.py`
scripts just had the logic to download, build and install the sources of those packages into the
virtual environment. I have now moved that logic directly into
[`waf`](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/build_utils/waf/virtenv.py#L651), but
apart from that I'm following the same approach: instead of documenting that you should download and
install library X at version Y, the build system just does it itself.
### Transparent virtual environment handling
As usual for `python` projects, I'm using a virtual environment to locally install 3rd party
`python` packages. And I go one step further and also install locally built versions of `csound`,
`protoc`, etc. there as mentioned above.
The normal process is to first "activate" the virtual environment, so these packages become visible
to `python`.
I now moved the management of the virtual environment into `waf`, so it becomes completely
transparent to the user. It is
[created](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/build_utils/waf/virtenv.py#L278) and
[populated](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/build_utils/waf/virtenv.py#L144) by
`waf configure` and automatically activated by subsequent uses of
[`waf`](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/wscript#L35) or when running
[noisicaä](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/bin/noisica%C3%A4#L28) from the build
directory.
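
The "activation" part is a simple trick; a sketch of what such a wrapper script can do, assuming
the virtual environment lives next to the sources:

```python
import os
import sys

VENV = os.path.join(os.path.dirname(os.path.abspath(__file__)), '.venv')

if not sys.executable.startswith(VENV):
    # Not running inside the virtual environment yet: re-exec this script
    # with the venv's interpreter, which sees all the locally installed packages.
    python = os.path.join(VENV, 'bin', 'python')
    os.execv(python, [python] + sys.argv)
```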
### `waf install`
Just out of curiosity I tried out what happens when I run `waf install`. And it turned out that the
changes to get that working correctly weren't that difficult. The trickiest part was again dealing
with those 3rd party packages. For noisicaä to work, it needs those packages and libraries installed
alongside its own files. I ended up just
[copying](https://github.com/odahoda/noisicaa/blob/0e4c0ddf/build_utils/waf/install.py#L57) those
files from the virtual environment into the target `lib` directory.
@ -0,0 +1,92 @@
Title: Development update (August 11)
Date: 2019-08-11
And another
[commit](https://github.com/odahoda/noisicaa/commit/53ccc97183a04c96a24c06f1c0eb90a05af6a1fe), which
only improves the development infrastructure, this time focused on testing.
### `./waf test`
I removed the `bin/runtests` script, which collects and runs all tests, and moved that logic into
`waf`, so tests are now run with the command `./waf test`. The main benefit is that I'm getting
parallel execution almost, but not quite, for free.
The drawbacks are:
* Each test module is executed in a subprocess, which causes some overhead. That's specifically
noticeable for the unittests, where the tests themselves are fast and thus the overhead matters
more.
* `pylint` is now also run as a subprocess. `runtests.py` imported it as a `python` module and
subsequent `pylint` runs could use previously cached data.
* I needed some additional code to store the results of each tests and merge them at the end. For
the unittests I'm using
[`unittest-xml-reporting`](https://pypi.org/project/unittest-xml-reporting/) to write them out as
XML and [`xunitparser`](https://pypi.org/project/xunitparser/) to read them back. For `mypy` and
`pylint` I'm just writing/reading plain text files.
* `mypy` needed some extra care, because running multiple `mypy` processes using the same cache
directory causes some race conditions. I have to use a pool of caches, so each cache is only used
by one `mypy` process at a time (see the sketch below).
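
The cache pool itself is just a blocking queue; a sketch with made-up names:

```python
import contextlib
import queue
import subprocess

# One cache directory per parallel test slot.
_caches: 'queue.Queue[str]' = queue.Queue()
for i in range(8):
    _caches.put('.mypy_cache.%d' % i)

@contextlib.contextmanager
def mypy_cache_dir():
    path = _caches.get()  # blocks until a cache directory is free
    try:
        yield path
    finally:
        _caches.put(path)

def run_mypy(module: str) -> None:
    with mypy_cache_dir() as cache:
        subprocess.run(['mypy', '--cache-dir', cache, module], check=False)
```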
But overall the test suite now runs about twice as fast. Some unscientific benchmarks:
| command | runtime |
| :--------------------------- | ------: |
| `bin/runtests` | 15:49 |
| `./waf test -j8` | 7:16 |
| `./waf test -j4` | 8:36 |
| `./waf test -j2` | 13:58 |
| `./waf test -j1` | 26:31 |
| `bin/runtests --tags=unit` | 0:42 |
| `./waf test -j8 --tags=unit` | 0:23 |
| `./waf test -j4 --tags=unit` | 0:30 |
| `./waf test -j2 --tags=unit` | 0:51 |
| `./waf test -j1 --tags=unit` | 1:35 |
I just used a single run of each command and `/usr/bin/time` for the measurement.
The overhead is about 2x, so you need two cores just to make up for it. Which means that you're
penalized if you're attempting to do noisicaä development on a single-core machine. But I guess
using a single-core machine (which must be pretty old) isn't much fun anyway these days.
What is interesting is that there is little gained when going from four to eight cores. That's
probably because my CPU is a quad-core with 2x hyper-threading, but I haven't looked into that issue
any deeper.
Another advantage of running tests in subprocesses is that I can now put a timeout on tests, so if
some test hangs, it will eventually fail and I don't have to kill the main process, thus losing the
actual test report.
### VM Tests
I reanimated the VM tests, which suffered from some bitrot and didn't work anymore. While at it, I
switched from `virtualbox` to `qemu` for the VM, because `qemu` is a bit easier to automate.
These tests launch a VM with a minimal installation of the OS (currently only Ubuntu 16.04 and
18.04) and then build noisicaä from the sources. This is mostly to verify that all dependencies are
correctly declared and the build instructions work as advertised on a system that isn't my
development system.
Once I got the tests working again, they uncovered some bitrot, which caused noisicaä to not work
anymore on Ubuntu 16.04 or Python 3.5.
### `clang-tidy`
The test suite now runs `clang-tidy` over the `C++` source files. I previously cranked up the
pickiness of `gcc` (i.e. `-Werror -pedantic`), but that meant that compilation would fail for every
minor issue. Now building became more "pythonic": `C++` sources get built as long as there are no
major issues (with all warnings disabled, i.e. `-w`), and once I want to also have the code "clean"
I run `clang-tidy` over it.
I haven't verified whether the issues that `gcc` would have detected are all covered by
`clang-tidy`. I'm just assuming that `clang-tidy` is "good enough". I also have not yet attempted to
fine-tune `clang-tidy` and run it with default settings for now.
### Upgraded `mypy` to 0.720
That version specifically has a new option `warn_unused_ignores`, so I could find and remove
overrides, which used to fix false-positives in some previous version of `mypy`, but were now
obsolete and potentially masked some real issues.
### Removed all build noise
At least with my setup, all random noise generated during the build steps has been removed.
@ -0,0 +1,42 @@
Title: Development update (September 8)
Date: 2019-09-08
Finally [a
commit](https://github.com/odahoda/noisicaa/commit/f81b4e38895b7ab22a0a72bd6a7e76ca61a5087f) with
some major new feature: Piano roll tracks.
### What's new
#### Piano roll tracks
[[img:2019-09-08-pianoroll-track.png]]
This isn't exactly a distinguishing feature of noisicaä (as the score tracks would be), but a
pretty standard feature of a DAW, so it's good to have.
For now I just implemented the most basic editing features to make it useful. There's no editing of
CC events, no recording, no importing of MIDI files. One editing feature, which I would consider
"basic", is still missing though: copy & paste. There is some clipboard support in other track
types, but I'm not happy with it, so I didn't want to add more cruft to that. The next thing I want
to tackle is a proper design of the copy & paste system, which should be ready for current and
future use cases.
#### Some minor UI tweaks
* The position of the splitter between the track list and the graph canvas is now persisted.
* The track list stays centered when changing the time scale (`ctrl-left` and `ctrl-right` -
hmm... that's pretty well hidden...).
### Internal changes
* Tracks are now `QWidget`s, which simplifies the UI event handling and allows the existing
`PianoRoll` widget to be used for the MIDI segments.
* Extended the existing `PianoRoll` widget (as introduced for the [MIDI
Looper](/blog/2019/06-23-development-update.md) node) to support editing and multiple MIDI
channels.
* The existing `PianoRollProcessor` has been extended to handle multiple segments, so it can be used
by the new `PianoRollTrack`, while keeping compatibility with existing uses by `ScoreTrack` and
`BeatTrack`.
* The tool-based UI event handling has been streamlined.
* I switched to using `QAction`s with shortcuts, instead of explicit keyboard event handling, to
trigger keyboard shortcuts (after I figured out how to make that work properly).
@ -0,0 +1,92 @@
Title: PySide2 non-migration
Date: 2019-09-22
I was about to finish another sprint (implementing a saner copy&paste system, as I [hinted
earlier](/blog/2019/09-08-development-update.md)). So I ran `mypy` over the sources and saw that it
complains about a bunch of things, which are caused by an issue with `PyQt5`.
`PyQt5` seems to not like classes that explicitly inherit from multiple "`Q`" classes.
E.g. something like this:
```python
from PyQt5 import QtWidgets

class SomeMixin(QtWidgets.QWidget):
    def someMethod(self):
        self.update()

class SomeLineEdit(SomeMixin, QtWidgets.QLineEdit):
    ...
```
`QLineEdit` is a subclass of `QWidget`, so the above should be perfectly fine. But not for
`PyQt5`. When I tried to make a minimal example, it just segfaulted, but I vaguely remember seeing
some exception being raised. To work around that, I had to make such mixin classes not inherit from
a "`Q`" class (e.g. just `object`), which is perfectly fine at runtime. But `mypy` has no idea that
this mixin is only used with some kind of `QWidget` and that `self.update()` is a valid method. So I
have to make `mypy` suppress all those false-positive warnings, which makes the code look ugly, and
I lose type checking for any real issues.
Besides that `PyQt5`'s support for type annotations was not so great anyway. I'm maintaining my own
set of stubs for it, which are based on the original `PyQt5` stubs, but with lots of manual tweaks
to make them actually useful.
So when I saw that `Qt` now [officially includes Python
bindings](http://blog.qt.io/blog/2018/12/06/qt-5-12-lts-released/) in the shape of
[`PySide2`](https://wiki.qt.io/Qt_for_Python), I was interested in evaluating a migration. There was
an open [ticket for adding stubs](https://bugreports.qt.io/browse/PYSIDE-735), but that was fixed
for version 5.13.
It's easy to install via `pip`, so I made a quick test to see if `PySide2` was also suffering from
the inheritance issue above. It wasn't, so let's give a real migration a try.
`PySide2` looks sufficiently similar to `PyQt5` that you might think a simple `s/PyQt5/PySide2/`
would already be enough. For some projects it might be, plus some more trivial renames
like `pyqtSignal`&rarr;`Signal` or `pyqtProperty`&rarr;`Property`.
But there are more subtle differences, which make it much harder (at least for noisicaä)...
* The `connect()` method of signals returns a boolean. Apparently it can fail (no idea under which
conditions...), so you should probably check the return value. Which is very unpythonic - it
should just raise an exception. And in `PyQt5` it returns a `Connection` instance, which can be
passed to `disconnect()`. That's the only way to disconnect a `lambda` function (without carrying
a reference to that function around), which I do a lot, so that's annoying.
* The way a class level `Signal` attribute gets turned into an instance level `SignalInstance`
attribute looks odd. In `PyQt5` the `pyqtSignal` implements the property protocol, so accessing
the attribute on an instance returns the appropriate `pyqtBoundSignal` instance. In `PySide2` the
metaclass does somehow create a `SignalInstance` for each `Signal` and [injects that into the
instance's
`__dict__`](https://code.qt.io/cgit/pyside/pyside-setup.git/tree/sources/pyside2/libpyside/pyside.cpp?h=5.13.1#n313),
though I haven't really figured out when this actually happens. The problem for me is that there
doesn't seem to be a way to get from a `Signal` instance and the owning object to the
`SignalInstance`. Which I do in some helper function that saves me a lot of boilerplate code. I
could find a workaround, but it is really ugly... Parsing the `str()` of the `Signal`
instance... I won't say more. Too embarrassing.
* Signals cannot use [an `Enum` as the type](https://bugreports.qt.io/browse/PYSIDE-239). That bug
is already 5 years old and for an ancient version of `PySide`. The workaround is to declare those
signals with type `object` (and lose some type safety).
* `QSettings.value()` does not return the default value, if it is `0` or `False`. That seems like a
plain and simple bug.
Those are the issues I have found so far. At least the unittests are now passing, but that
doesn't really mean that much, because the test coverage for the UI code isn't that great. And
getting there wasn't easy, because `PySide2` is also very crash happy. So instead of a nice Python
exception telling me where and what was wrong, I just got the unhelpful "Segmentation fault (core
dumped)" message. I had to perform the "install from source" dance to get a version with debug
symbols, so `gdb` could at least tell me something about the problem.
And now I'm getting this exception:
```text
Traceback (most recent call last):
[...]
File "/home/pink/noisicaa/build/noisicaa/ui/control_value_connector.py", line 62, in __init__
self.valueChanged.connect(self.__onValueEdited)
TypeError: connect() takes 3 positional arguments but 4 were given
```
Sorry, but that simply does not make sense.
It would have been nice to have a viable alternative to `PyQt5`, and perhaps `PySide2` is that, if
you're starting a new project from scratch. But migrating noisicaä does not seem worth the effort,
at least not now. There is some hope that development of `PySide2` will get a boost, at least for a
while, now that it has been included in the `Qt` canon. Let's give it some more time.
@ -0,0 +1,38 @@
Title: Development update (September 23)
Date: 2019-09-23
As [mentioned before](/blog/2019/09-08-development-update) I worked on the copy&paste system, which
has just been
[committed](https://github.com/odahoda/noisicaa/commit/f4c583ee5b9da3f85cfabe6eaaa91bfaa8ff79cb).
### What's new
The only user visible change is that you can now copy&paste segments in piano roll tracks, as well
as MIDI notes within segments.
### Internal changes
noisicaä now uses the system clipboard (via `Qt5`'s API) to store the copied items. The items are
serialized in a common protobuf message, with extensions for the various things that can be
copied. That new system has been used to add copy&paste support to piano roll tracks and the
existing copy&paste functionality for score and beat tracks has been migrated to it.
There is now a single class, which keeps track of the clipboard contents and the current focus
widget. It also owns the `QAction`s for "Cut", "Copy", "Paste" and "Paste as link", and decides
which of these actions are enabled based on the current state. If one of those actions gets
triggered, it is sent to the current focus widget, which then implements the actual business logic.
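
Stripped down to the core idea, it looks something like this (hypothetical names, only the "Copy"
action shown):

```python
from PyQt5 import QtGui, QtWidgets

class ClipboardManager:
    # Owns the edit actions and routes them to the current focus widget.

    def __init__(self, window: QtWidgets.QMainWindow) -> None:
        self.__focus_widget = None
        self.copy_action = QtWidgets.QAction("Copy", window)
        self.copy_action.setShortcut(QtGui.QKeySequence.Copy)
        self.copy_action.triggered.connect(self.__copy)

    def setFocusWidget(self, widget) -> None:
        self.__focus_widget = widget
        # The action state follows what the focused widget can do right now.
        self.copy_action.setEnabled(widget is not None and widget.canCopy())

    def __copy(self) -> None:
        if self.__focus_widget is not None:
            # The focused widget implements the actual business logic.
            self.__focus_widget.copyToClipboard()
```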
That seems to work pretty well, and the code looks much cleaner than the previous dispatching of
method calls. But I'm not 100% sure that I covered all cases in which the state of the actions needs
to be updated. I have the feeling that there are conditions in which e.g. "Copy" should be enabled,
but isn't, or vice-versa.
Another thing where I'm not quite finished is how the sets of selected items are managed. Currently
the three widgets, which do support selections, implement that independently. So there's a bit of
repetitive code, which is quite similar, but with more or less subtle differences. Some of those
differences should be removed, so all widgets exhibit the same behavior (which is currently quite
inconsistent, including very different colors to highlight selected items). But other differences
are inherent to the nature of those widgets. E.g. items which are lined up on a linear timeline
(like segments on a piano roll track) are different from items placed on a two-dimensional canvas
(like notes in a piano roll segment or nodes on the pipeline graph canvas). I'd like to get a
better feeling for what should be shared and what is specific to each widget, before I try to come
up with some class that factors out the selection management.
@ -0,0 +1,57 @@
Title: Development update (January 3)
Date: 2020-01-03
There hasn't been an update in a while. And there hasn't been a lot of work on noisicaä either.
I started a new sprint back in September, but then there was a vacation, and that yanked me out of
my routine, and it took me an awfully long time to find my way back.
It wasn't like I was not doing anything. I spent some time improving my emacs configuration,
including setting up a configuration for my daughter's novel writing ambitions. We bought one of
those all-in-one printer-scanner-fax[^1] things, which triggered me to write the beginnings of a
little document management system, so we can eventually move to (mostly) digital archival of
documents that you're supposed to keep. I reinstalled my dying home server on a Raspberry Pi 4,
which went surprisingly smoothly.
But those were the kind of in-between projects that you do to distract yourself, not something to
focus on. But perhaps that is just what I need every once in a while, as I feel that I have rebuilt
quite a bit of energy over the past few days. Now I "just" have to direct that energy towards
noisicaä.
Anyway. Just to wrap up the stuff that I was doing (mostly back in September), here's the latest
[commit](https://github.com/odahoda/noisicaa/commit/bae6940b3649995e59ce4c0a0670a321f49b5682).
There are still plenty of bullet points on my checklist, which I initially planned to do in that
sprint, but I'll defer those to another time.
Even the commit was already done three weeks ago, and I couldn't get myself to write the
accompanying blog post until now.
### What's new
#### Track list improvements
[[thumb:2020-01-03-track-list.png]] I did quite a bit of refactoring and improving of the track
list. Tracks can now be resized vertically and reordered with drag-n-drop. There's also a new zoom
function which zooms in both dimensions, but that's only accessible via keyboard shortcuts with no
visible hint that those exist. And so far only pianoroll tracks handle resizing correctly.
#### Load test projects
[[thumb:2020-01-03-loadtest-project.png]] There's a hidden feature (I'm not telling you how to
access it &#8212;&nbsp;read the source to find out) to create a new project filled with random
data. This is really just a development tool for myself, so the usability is not that great and that
won't change. Like the name suggests, I can use this feature to stress test noisicaä and find
performance bottlenecks.
### Internal changes
- Some refactoring of the process startup, which previously triggered some scary warning messages
(`"RuntimeWarning: 'noisicaa.core.process_manager' found in sys.modules after import of package
'noisicaa.core', but prior to execution of 'noisicaa.core.process_manager'; this may result in
unpredictable behaviour"`).
- A new `move` mutation type to efficiently reorder lists.
[^1]: Yeah, that stuff is still around. I have no use for the fax and don't intend to even plug it
    into a telephone socket. But it's not like we wasted money on a feature that we're not using,
    given how dirt cheap the hardware itself is &#8212;&nbsp;until you have to buy the first toner
    refill.
@ -0,0 +1,41 @@
Title: Development update (February 21)
Date: 2020-02-21
Took a while, but I finally managed to kick myself back into a more productive mood. It was just a
matter of getting started, but as usual that's the hardest part.
So here's the [latest commit](https://github.com/odahoda/noisicaa/commit/eb2abe9cb0ed85cf0adae5baeeecb2147153f27b).
There were also some more commits to the master branch afterwards, but those were just maintenance
work, i.e. upgrading the package dependencies, incl. `mypy`/`pylint`, which triggered some code
cleanups. I just did not bother to create a new branch for that.
### What's new
#### Sample track improvements
[[thumb:2020-02-21-sample-tracks.png]] I made various improvements to the sample tracks (which I
should really rename to "Audio Track") to make them at least somewhat usable - for my current
purposes, which is to just import an existing song and then try to decompose it into its parts and
rebuild those using noisicaä. I.e. I do not need anything fancy, just the ability to import an `MP3`
or `FLAC` file into a track and play it back. And do that without the UI blowing up.
Audio files are either read with [`libsndfile`](http://www.mega-nerd.com/libsndfile/) (like before)
or piped through [`ffmpeg`](https://www.ffmpeg.org/) for formats like `MP3` or `AAC`.
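
The `ffmpeg` path is essentially just a pipe producing raw samples; roughly (the flags are standard
`ffmpeg`, the function itself is made up for this post):

```python
import subprocess
import numpy as np

def decode(path: str, sample_rate: int = 44100) -> np.ndarray:
    # Let ffmpeg do the decoding and hand us raw 32-bit float samples
    # (here downmixed to a single channel at a fixed sample rate).
    proc = subprocess.run(
        ['ffmpeg', '-i', path,
         '-f', 'f32le', '-ac', '1', '-ar', str(sample_rate), 'pipe:1'],
        stdout=subprocess.PIPE, stderr=subprocess.DEVNULL, check=True)
    return np.frombuffer(proc.stdout, dtype=np.float32)
```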
To get reasonably performant rendering, the audio data is split into chunks, which are rendered
asynchronously and then cached. Even though the main number crunching is done by
[`numpy`](https://numpy.org/), reading directly from mmap'ed files, you still notice that this is
`Python` (i.e. it could benefit quite a lot from a `C++`/`Cython` version).
The audio data is rendered as [`rms`](https://en.wikipedia.org/wiki/Root_mean_square) and min/max
(should be the same way as [`Audacity`](https://www.audacityteam.org/) does it). And stereo files
are now correctly rendered as well.
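
Per rendered column that's just a reduction over a chunk of samples; a `numpy` sketch, assuming one
chunk of samples maps to one pixel column:

```python
import numpy as np

def render_stats(samples: np.ndarray, chunk_size: int):
    # One row per rendered column, then reduce each row.
    n = len(samples) // chunk_size
    chunks = samples[:n * chunk_size].reshape(n, chunk_size)
    rms = np.sqrt(np.mean(np.square(chunks), axis=1))
    return chunks.min(axis=1), chunks.max(axis=1), rms
```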
There are still no advanced features like editing, envelopes, disk streaming, etc.
### Internal changes
* Audio files are now decoded into raw files (32-bit floats, a single channel per file) in the
project directory, which can be directly loaded into memory for playback.
* Some test coverage improvements.
@ -0,0 +1,33 @@
Title: Development update (February 22)
Date: 2020-02-22
Oops, already the [next
commit](https://github.com/odahoda/noisicaa/commit/8fd99a2ef97b1dc89ac517d857db80f0dd8f2383), though
not a very big one (and I have to admit that I was already working on it when I wrote yesterday's
update).
### What's new
#### New toolbar
[[thumb:2020-02-22-toolbar.png]] The toolbar has been redesigned. Instead of using the normal
`QToolBar` widget, it now uses a custom layout. I rearranged the buttons, added buttons to
move the playhead back and forward by a single beat, added a widget to display the current time
(both in musical time and wall time), added a VU meter to display the master output level and moved
the engine load graph from the status bar at the bottom of the window into the toolbar.
#### VU Meter node