I started working on this project not really knowing where it would end up, and with a fair amount of trepidation about whether it would work:
- Would it create output that I could work with?
- Would it produce reasonable results with a reduced data set?
- Would it still kind of sound like a version of “me”?
- Could I work creatively with this AI version of “me”?
Well, I have to say, I’m very surprised by the results of my first track. Not only does it sound like me, but the process of producing it was also largely pain-free.
Now, this is only the first track, and other inputs may produce far more unpredictable output. But I’m happy with this as the first produced track for the EP.
So, without further ado, here is a link to the WIP of my first track for the EP:
After listening, I think it’s worth revisiting the questions I asked at the start of this process, to see how my creative and cognitive conception of it has changed, and what opportunities it opens up, both creatively and in terms of workflow.
Would it create output that I could work with, and would it produce reasonable results with a reduced data set?
I need to be honest here: the first outputs from the AI were not great. I tried my version of the default weights, with pitch, step, and duration all set to 1.0, and the results were disappointing:
It’s boring, and I’ve had to lean on a lot of production techniques to hide the main melody. Not a great start, and it had me questioning my decision to take this on as a project.
I thought the problem could be the size of my model data, or the input I gave it. This led me to increase the amount of training data, and to research the “.repeat()” function in TensorFlow, which let me selectively add more synthetic data to the dataset. After doing this I still had issues with the output, so further research pointed me to weights and how they are applied. I’ll go into weights in a later blog post, but in short, they tell the model what I want it to treat as important.
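To make that concrete, here’s a minimal sketch of the “.repeat()” approach (the dataset contents, repeat counts, and the take(64) selection are placeholders for illustration, not my actual training pipeline):

```python
import tensorflow as tf

# Placeholder standing in for the real dataset of (pitch, step, duration)
# training examples parsed from MIDI files.
notes_ds = tf.data.Dataset.from_tensor_slices(
    tf.random.uniform((256, 3), dtype=tf.float32))

# .repeat(3) cycles through the dataset three times per epoch,
# effectively tripling how much data the model sees.
augmented_ds = notes_ds.repeat(3)

# To emphasise a particular slice, repeat just that subset and fold
# it back in (the take(64) selection here is hypothetical):
emphasis_ds = notes_ds.take(64).repeat(4)
combined_ds = augmented_ds.concatenate(emphasis_ds)

# Shuffle and batch so repeated examples aren't adjacent.
train_ds = combined_ds.shuffle(buffer_size=1024).batch(64)
```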
Changing the weights to the values shown in Figure 1 produced the outputs in Figures 2 and 3.

[Figure 1: the adjusted weight settings. Figures 2 and 3: the resulting outputs.]
This created something that sounded a lot closer to the kind of output I was looking for, and it set up a back and forth between me and the AI that isn’t too dissimilar to brainstorming with a human collaborator at the start of the creative process.
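For anyone wondering what “changing the weights” actually looks like in code: in a Keras model with separate pitch, step, and duration outputs, the relative importance of each output is set with loss_weights at compile time. Here’s a minimal sketch, with a toy model and illustrative values rather than my actual Figure 1 settings:

```python
import tensorflow as tf

# Toy stand-in for the melody model; the real architecture lives
# in the training script.
inputs = tf.keras.Input(shape=(25, 3))
x = tf.keras.layers.LSTM(128)(inputs)
outputs = {
    'pitch': tf.keras.layers.Dense(128, name='pitch')(x),
    'step': tf.keras.layers.Dense(1, name='step')(x),
    'duration': tf.keras.layers.Dense(1, name='duration')(x),
}
model = tf.keras.Model(inputs, outputs)

# loss_weights tells the optimizer how much each output matters.
# Setting everything to 1.0 treats pitch, step, and duration as
# equally important; the values below are illustrative only.
model.compile(
    optimizer='adam',
    loss={
        'pitch': tf.keras.losses.SparseCategoricalCrossentropy(
            from_logits=True),
        'step': 'mse',
        'duration': 'mse',
    },
    loss_weights={'pitch': 0.05, 'step': 1.0, 'duration': 1.0},
)
```

Lowering the pitch weight relative to step and duration, for example, nudges the model to care more about timing than about hitting exact notes.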
Would it still kind of sound like a version of “me” and could I work creatively with this AI version of “me”?
I’m not sure whether my influence over this piece comes purely from my production values, or whether the melody itself is very much me. But something interesting has come up in the output from the AI: almost everything I get from ST4RT+ is in the key of C minor. While I trained the model on pieces in several different keys, the bulk of what I write defaults to C minor. I have no idea why I do this; I just like that key. But I found it interesting that ST4RT+ picked up on it, and the outputs certainly sound like me because of it.
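As an aside, this kind of key bias is easy to sanity-check programmatically. Here’s a minimal sketch using the music21 library, assuming the generated pieces are saved as MIDI files (“generated_track.mid” is a placeholder path):

```python
from music21 import converter

# Load a generated MIDI file (placeholder path).
score = converter.parse('generated_track.mid')

# music21's key analysis (Krumhansl-Schmuckler by default) estimates
# the most likely key of the piece.
estimated_key = score.analyze('key')
print(estimated_key)  # e.g. "c minor"
```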
In a way, that key bias makes the output feel like a part of me, and it helps the pieces feel familiar. Does it sound like me? Yes. Can I work with the AI creatively? A resounding “YES”!