Experiment 9: Wavetable Sampler

This month’s experiment is a considerable step forward on three fronts: Organelle programming, EYESY programming, and computer-assisted composition. In terms of Organelle programming, rather than taking a pre-existing algorithm and altering it (or hacking it) to do something different, I decided to create a patch from scratch. I first created it in PureData, and then reprogrammed it to work specifically with the Organelle. Creating it in PureData first meant that I used horizontal sliders to represent the knobs, and that I sent the output directly to the DAC (digital-to-analog converter). When coding for the Organelle, you use a throw~ object to output the audio instead.

The patch I wrote, Wavetable Sampler, reimagines a digital sampler, and in doing so, basically reinvents wavetable synthesis. The conventional approach to sampling is to travel through the sound in a linear fashion from beginning to end. The speed at which we travel through the sound determines both its length and its pitch: faster translates to shorter and higher pitched, while slower means longer and lower pitched.
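
As a rough illustration of that trade-off (a hypothetical sketch, not anything from the patch itself):

# Conventional (linear) sample playback: speed changes pitch and duration together.
SAMPLE_RATE = 44100

def playback_length_seconds(num_samples, speed):
    # speed = 1.0 is the original pitch; 2.0 is an octave up, 0.5 an octave down
    return num_samples / (SAMPLE_RATE * speed)

print(playback_length_seconds(441000, 1.0))   # 10.0 seconds at the original pitch
print(playback_length_seconds(441000, 2.0))   # 5.0 seconds, one octave higher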

I wanted to try using an LFO (low frequency oscillator) to control where we are in a given sample. Using this technique, the sound goes back and forth between two points in the sample continuously. In my programming I quickly learned that two parameters are strongly linked, namely the frequency of this oscillator and its amplitude, which becomes the distance travelled in the sample. If you want the sample to be played at a normal speed, that is, at a speed where we recognize the original sample, those two values need to be inversely proportional. To describe this simply, a low frequency requires the oscillator to travel farther through the sample, while a higher frequency needs only a small amount of space. Thus, we see the object expr (44100 / ($f1)), with the number 44,100 being the sample rate of the Organelle. Dividing the sample rate by the frequency of the oscillator yields the number of samples that make up a cycle of sound at that frequency.
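
In Python terms, the arithmetic behind that expr object looks something like this (a sketch of the idea, not the actual Pd code):

SAMPLE_RATE = 44100  # the Organelle's sample rate

def samples_per_cycle(freq_hz):
    # equivalent of expr (44100 / ($f1)): how many samples the oscillator
    # must sweep through per cycle for the material to pass at normal speed
    return SAMPLE_RATE / freq_hz

print(samples_per_cycle(1.0))    # 44100.0 samples for a 1 Hz sweep
print(samples_per_cycle(220.0))  # ~200 samples for a 220 Hz sweep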

Obviously, a user might want to move through the sample at something other than the normal rate. Making that deviation a separate control, however, spares the user from having to mentally calculate what sample increment would produce normal-speed playback. I also assumed that a user will want to control where we are within a much longer sample. For instance, the sample I am using with this instrument is quite long. It is an electronic cello improvisation I did recently that lasts over four minutes.

The sound I got out of this instrument was largely what I expected. However, one aspect stood out more than I thought it would. I am using a sine wave oscillator in the patch. This means that the sound travels quickly during the rise and fall portions of the waveform, but as it approaches the peak and trough of the waveform it slows down quite dramatically. At low frequencies this results in extreme pitch changes. I could easily have solved this issue by switching to a triangle waveform, as the speed would be constant with such a waveform. However, I decided that the oddness of these pitch changes was the feature of the patch, and not the bug.
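
A quick numerical sketch of why that happens (hypothetical values, just differentiating the sine that moves the read position):

import math

def read_rate(amplitude, freq_hz, t):
    # read position = center + amplitude * sin(2*pi*f*t)
    # the instantaneous read rate is its derivative:
    return amplitude * 2 * math.pi * freq_hz * math.cos(2 * math.pi * freq_hz * t)

A, f = 44100, 1.0               # a 1 Hz sweep across one second's worth of samples
print(read_rate(A, f, 0.0))     # ~277,000 samples/s at the zero crossing (about 2*pi times normal speed)
print(read_rate(A, f, 0.25))    # ~0 samples/s at the peak, where playback nearly stops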

While I intended the instrument to be used at low frequencies, I found that it was actually far more practical and conventionally useful at audio frequencies. Human hearing starts around 20 Hz, which means that if you were able to clap your hands 20 times in a second you would begin to hear it as a very low pitch rather than a series of individual claps. One peculiarity of sound synthesis is that if you repeat a series of samples, no matter how random they may be, at a frequency that lies within human hearing, you will hear a pitch at that frequency. The timbre of two different sets of samples at the same frequency may vary greatly, but we will hear them as being, more or less, the same pitch.

Thus, as we move the frequency of the oscillator up into the audio range, it turns into somewhat of a wavetable synthesizer. While wavetable synthesis was created in 1958, it didn’t really exist in its full form until 1979. At this point in the history of synthesis it was an alternative to FM synthesis, which offered robust sound possibilities but was very difficult to program, and to digital sampling, which could recreate any sound that could be recorded but was extremely expensive due to the cost of memory. In this sense wavetable synthesis is a data-saving tool. If you imagine a ten-second recording of a piano key being struck and held, the timbre of that sound changes dramatically over those ten seconds, but ten seconds of sampling time in 1980 was very expensive. If instead we digitize individual waveforms at five different locations in the ten-second sample, we can then gradually interpolate between those five waveforms to create a believable (in 1980) approximation of the original sample. That being said, wavetable synthesis also created a rich, interesting approach to synthesizing new sounds, such that the technique is still somewhat commonly used today.
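
A minimal sketch of that interpolation idea (hypothetical single-cycle tables, not anything from an actual synth):

import math

def interpolate_tables(table_a, table_b, mix):
    # mix = 0.0 returns table_a, 1.0 returns table_b, values in between crossfade them
    return [a * (1.0 - mix) + b * mix for a, b in zip(table_a, table_b)]

N = 256
plain = [math.sin(2 * math.pi * i / N) for i in range(N)]
bright = [math.sin(2 * math.pi * i / N) + 0.3 * math.sin(6 * math.pi * i / N) for i in range(N)]
halfway = interpolate_tables(plain, bright, 0.5)   # a timbre partway between the two snapshots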

When we move the oscillator for Wavetable Sampler into the audio range, we are essentially creating a wavetable. The parameter that affects how far the oscillator travels through the sample creates a very interesting phenomenon at audio rates. When that value is very low, the sample values vary very little. This results in waveforms that approach a sine wave in their simplicity and spectrum. As this value increases, more values are included, which may differ greatly from each other. This translates into adding harmonics to the spectrum. Which harmonics are added depends upon the wavetable, or snippet of a sample, in question. However, as we turn up the value, it tends to add harmonics from lower ones to higher ones. At extreme values, in this case ten times a normal sample increment, the pitch of the fundamental frequency starts to be overtaken by the predominant frequencies in that wavetable’s spectrum. One final element of interest with the construction of the instrument in relation to wavetable synthesis is the use of a sine wave for the oscillator. Since the rate of change speeds up during the rise and fall portions of the waveform and slows down near the peak and the valley of the wave, there are portions of the waveform that are rich in change while in other portions the rate of change is slow.
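
Roughly, the behavior at audio rate can be modeled like this (a hypothetical sketch of the idea, not the patch’s actual Pd code): the sine oscillator picks read positions inside a window of the sample, and the width of that window decides how much the values within one cycle can differ.

import math

def one_cycle(sample, center, width, table_size=256):
    # Read one output cycle: a sine sweep of +/- width samples around center.
    # Narrow windows revisit nearly identical values; wide windows pull in
    # values that differ greatly, adding harmonics to the spectrum.
    cycle = []
    for i in range(table_size):
        pos = int(center + width * math.sin(2 * math.pi * i / table_size))
        cycle.append(sample[pos % len(sample)])
    return cycle

fake_sample = [math.sin(0.01 * n) + 0.2 * math.sin(0.3 * n) for n in range(44100)]
simple = one_cycle(fake_sample, 22050, 5)     # values vary little: close to a sine
rich = one_cycle(fake_sample, 22050, 500)     # values vary widely: a much richer spectrum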

Since the value that the oscillator travels seems to be analogous to increasing the harmonic spectrum, I decided to put that on knob four, as that is the knob I have been controlling via breath pressure with the WARBL. On the Organelle, knob one sets the index of where we are in the four-minute-plus sample. The frequency of the oscillator is set by the key that is played, but I use the second knob as a multiplier of this value. This knob is scaled from .001, which will yield a true low frequency oscillator, to 1, which will be at the pitch played (.5 will be down an octave, .25 down two octaves, etc.). As stated earlier, the fourth knob modifies the amplitude of the oscillator, affecting the range of samples that will be cycled through. This left the third knob unused, so I decided to use that as a decay setting.

The PureData patch that was used to generate the accompaniment for this experiment was based upon the patch created for last month’s experiment. As a reminder, this algorithm randomly chooses one of four musical meters, 4/4, 3/4, 7/8, and 5/8, at every new phrase. I altered this algorithm to fit a plan I have for six of the tracks on my next studio album, which will likely take three or four years to complete. Rather than selecting the meters randomly, I define an array of numbers that represents the meters to be used, in the order that they appear. At every phrase change I then move to the next value in the array, allowing the meters to change in a predetermined fashion.

I put the piece of magic that allows this to happen in pd phrasechange. The inlet to this subroutine goes to a sel object that contains the starting points of new phrases, expressed in sixteenth notes. When one of those values is reached, a counter is incremented, a value is read from the table meter and sent to the variable currentmeter, and the phrase is reset. This subroutine has four outlets. The first starts a blinking light that indicates that the piece is 1/3 finished, the second starts a blinking light when the piece is 2/3 of the way finished, and the third starts a blinking light that indicates the piece is on its final phrase. The fourth outlet stops the algorithm, bringing the piece to a close. Those blinking lights are on the right-hand side of the screen, located near the buttons that indicate the current meter and the current beat. A performer can then, with some practice, watch the computer screen to see what the current meter is, what the current beat is, and have an idea of where they are in the form of the piece.
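
Translated out of Pd into rough Python, the logic looks something like this (a sketch with made-up boundary values, not the actual patch):

meter_sequence = [4, 3, 7, 5, 4, 4]     # meters stored in the "meter" table, in order
phrase_boundaries = {64, 128, 192}      # sixteenth-note counts where new phrases begin (example values)

phrase_index = 0
current_meter = meter_sequence[0]

def on_sixteenth(count):
    global phrase_index, current_meter
    if count in phrase_boundaries:       # the sel object
        phrase_index += 1                # the counter
        current_meter = meter_sequence[min(phrase_index, len(meter_sequence) - 1)]
        # ...reset the phrase counter here, and near the end of the piece
        # trigger the 1/3, 2/3, and final-phrase lights or stop the algorithm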

This month I created my first program for the EYESY, Basic Circles. The EYESY manual includes a very simple example of a program. However, it is too simple: it just displays a circle. The circle doesn’t move, none of the knobs change anything, and the circle isn’t even centered. With very little work I was able to center the circle and change it so that the size of the circle was controlled by the volume of the audio. Likewise, I was able to get knob four to control the color of the circle, and the fifth knob to control the background color.

However, I wanted to create a program that used all five knobs on the EYESY. I quickly came up with the idea of using knob two to control the horizontal positioning and the third knob to control the vertical positioning. I still had one knob left, and only a simple circle in the middle of the screen to show for it. I decided to add a second circle that is half the size of the first one. I used knob five to set the color for this second circle, although oddly it does not result in the same color as the background. Yet this still was not quite visually satisfying, so I used knob one to set an offset from the larger circle. Accordingly, when knob one is in the center, the small circle is centered within the larger one. As you turn the knob to the left, the small circle moves toward the upper left quadrant of the screen. As you turn the knob to the right, the smaller circle moves toward the lower right quadrant. This is simple, but offers just enough visual interest to be tolerable.

import os
import pygame
import time
import random
import math

def setup(screen, etc):
    pass

def draw(screen, etc):
    # Scale the large circle's radius to the incoming audio level;
    # the small circle is always half that size.
    size = int(abs(etc.audio_in[0]) / 100)
    size2 = int(size / 2)

    # Knob 4 picks the large circle's color; knob 5 picks the small
    # circle's color and is also used below for the background.
    color = etc.color_picker(etc.knob4)
    color2 = etc.color_picker(etc.knob5)

    # Knobs 2 and 3 position the large circle horizontally and vertically.
    X = int(320 + (640 * etc.knob2))
    Y = int(180 + (360 * etc.knob3))

    # Knob 1 offsets the small circle diagonally from the large one;
    # near its center position the small circle sits roughly centered.
    X2 = int(X + 160 - (310 * etc.knob1))
    Y2 = int(Y + 90 - (180 * etc.knob1))

    etc.color_picker_bg(etc.knob5)
    pygame.draw.circle(screen, color, (X, Y), size, 0)
    pygame.draw.circle(screen, color2, (X2, Y2), size2, 0)

While the program, listed above, is very simple, it was my first time programming in Python. Furthermore, targeting the EYESY is not the simplest thing to do. You have to plug a wireless USB adapter into the EYESY, connect to the EYESY via a web browser, upload your program as a zip file, unzip the file, and then delete the zip file. You then have to restart the video on the EYESY to see if the patch works. If there is an error in your code, the program won’t load, which means you cannot troubleshoot it; you just have to look through your code line by line and figure it out. I did, however, learn to use an online Python compiler to check for basic errors. If you have a minor error in your code, the EYESY will sometimes load the program and display a simple error message onscreen, which will allow you to at least figure out where the error is.
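
That kind of pre-flight syntax check can also be done locally with Python’s own compiler (one possible approach, assuming the mode’s script is named main.py):

import py_compile

try:
    py_compile.compile("main.py", doraise=True)   # raises if the file has a syntax error
    print("no syntax errors found")
except py_compile.PyCompileError as err:
    print(err)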

I’m very pleased with the backing track and, given that it is my first program for the EYESY, with the visuals. I’m not super pleased with the audio from the Organelle. Some of this is due to my playing. For this experiment I used a very limited set of pitches in my improvisation, which made the fingering easier than it has been in other experiments, and I printed out a fingering chart and kept it in view as I played. Much of the remaining trouble is due to my lack of rhythmic accuracy. I am still getting used to watching the screen in PureData to see what meter I am in and what the current beat is. I’m sure I’ll get the hang of it with some practice.

One fair question to ask is why I continue to use the WARBL to control the Organelle if I consistently find it so challenging. The simple answer is that I consider a wind controller to be the true test of the expressiveness of a digital musical instrument. I should be able to make minute changes to the sound through slight changes in breath pressure. After working with the Organelle for nine months, I can say that it fails this test. The knobs on the Organelle seem to quantize at a low resolution. As you turn a knob you are changing the resistance in a circuit. The resulting signal is then quantized, that is, rounded to the nearest available digital value. I have a feeling that the knobs on the Organelle quantize to seven bits in order to correspond directly to MIDI, which is largely a seven-bit protocol. Seven bits of data only allow for 128 possible values. Thus, we hear discrete changes rather than continuous ones as we rotate a knob. For some reason I find this shortcoming is amplified rather than softened when you control the Organelle with a wind controller. At some point I should do a comparison where I control a PureData patch on my laptop using the WARBL without using the Organelle.
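
If that guess about seven bits is right, the effect is easy to model (a hypothetical sketch, not the Organelle’s actual firmware):

def quantize_knob(value, bits=7):
    # Map a continuous 0.0-1.0 knob position onto 2**bits discrete steps.
    steps = (1 << bits) - 1           # 127 steps for 7 bits, i.e. 128 possible values
    return round(value * steps) / steps

print(quantize_knob(0.500))   # 0.5039...
print(quantize_knob(0.503))   # 0.5039... -- the same step, so the difference is lost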

I recorded this experiment using multichannel recording, and I discovered something else that is disappointing about the Organelle. I found that there was enough background noise coming from the Organelle that I had to use a noise gate to clean up the audio a bit. In fact, I had to set the threshold at around -35 dB to get rid of the noise. This is actually pretty loud. The Volca Keys also requires a noise gate, but a lower threshold of -50 or -45 dB usually does the trick with it.

Perhaps this noise is due to a slight ground loop, a small short in the cable, RF interference, or some other problem that does not originate in the Organelle, but it doesn’t bode well. Next month I may try another FM or additive instrument. I do certainly have a good head start on the EYESY patch for next month.

Sabbatical: Week 16 Update

It’s finished. I completed the final mixes of 737, 727, and 747. I uploaded them to Bandcamp. Sales from that initial album release put me into a higher category of Bandcamp artists, where I can now include up to 300 MB of bonus material with every album. This will allow me to create nice PDF liner notes for the albums I have already created. I also submitted the album to DistroKid, which distributes it to Spotify, Apple Music, and Amazon Music, as well as others. The album went live on Spotify and Apple Music within 24 hours, while Amazon Music took an additional day.

The work for the album is not entirely finished, as I have some promotion to do. I also plan on making a nice set of PDF liner notes. Finally, I have to set up a couple of events for Spring 2024 to share my work during the sabbatical and to promote the album. However, I have accomplished everything I set out to do in my sabbatical proposal. Accordingly, this will serve as the final entry in my sabbatical reports. For the sake of convenience, I will also link all of my sabbatical updates below so that they can be accessed from a single page. Thank you for coming on this journey with me, and I hope you enjoy the album.

Introduction
Week 1
Week 2
Week 3
Week 4
Week 5
Week 6
Week 7
Week 8
Week 9
Week 10
Week 11
Week 12
Week 13
Week 14
Week 15

Sabbatical: Week 15 Update

I got a respectable amount of work done this week. I completed the final mixes for five movements: A300, DC-8, 707, DC-10, and DC-9. I also revised the final mix I created for TriStar based upon feedback from Ben (Slash Gordon). While this may not numerically seem like much work in comparison to previous weeks, the mix process has been taking quite a while. Thirty-two tracks are a lot to balance! Ultimately I am well on schedule to complete the project by the end of week 16.

I haven’t given an update on my progress on my next album project since week 12. I have completed the first draft of the algorithms that will generate the accompaniment (synths and drum machine) for a third of the album. I am currently working on a prototype of the algorithm that will generate accompaniment for another third of the album. I am guessing I have about one more week of work on this algorithm.

Since I’ve shared the final mix of 707, I figured I’d re-share the string quartet arrangement for those who want to follow along. The B section uses notes from the D harmonic minor scale, while the final section uses only the dominant (A). I’m pleased with the unique rhythmic solution at the end. I wanted each instrument to slowly arpeggiate through every octave of the note A that it can reasonably play. Given where each instrument starts, each ends up with a different number of notes. Rather than have every instrument move at the same time, I spread them out, moving most of them from note to note at staggered times. This spreads out the motion a bit.

Sabbatical: Week 14 Update

It has been a notable week in the production project. I recorded 13 phrases for Taishogoto, and these recordings marked the end of the recording process for the album. I’m now moving into the final mixdown process. You may notice that I’ve only recorded 14 Taishogoto phrases in total, as opposed to the usual 27 phrases (three phrases for each of the nine movements). I’ve really intended the Taishogoto phrases to balance out the movements. Not every movement includes piano, and not every movement features three xylophone or three timpani phrases. The Taishogoto phrases sort of even things out a bit and allow for a bit of variety in terms of instrumentation. I recorded two phrases each for 737 & DC-9, and three phrases each for DC-8, 707, and 747.

I feel like I should give a bit more of a note about playing the Taishogoto. In case it wasn’t clear from last week’s entry, the Taishogoto is a monophonic instrument (you can only play one note at a time). The typewriter-style keys on the instrument press down the strings to produce a higher pitch. One outcome of this is that the lower notes of the instrument are physically spread out quite a bit, while the high pitches can be pretty close together. That means large leaps in the low range are trickier, and sometimes impossible to perform smoothly. The other thing that happens due to the way the instrument works is that if you press two keys at the same time, the higher note will override the lower note. Thus, as you practice the instrument you learn that if you want smooth motion between notes, it works well to preset your left-hand pinky on the lowest note in a figure and use the other fingers of the left hand to play the higher pitches in the figure.

I was able to get a head start on the mixdown process, creating what I expect to be the final mixdown of TriStar. I had naively thought that I’d be able to do the mixdown in an hour or so. I’m guessing my mixdown process was more like three or four hours. I’ve been doing rough mixes all along, so I guess it could have been a much longer process. I find creating final mixdowns a somewhat frustrating experience. There are a lot of comparisons with very subtle differences. It’s kind of like when you go to the optometrist, and there’s a lot of “which is better, A or B? B or C? B or D?”

For each of the three sections (beginning, middle, and end), I first made sure that the panning was what I wanted it to be. Then I’d listen to that section over and over and make notes about what I thought was too loud and what was too quiet. I’d then bring down the volume on the material that was too loud, repeating that process until nothing struck me as too loud. This often left me with quite a bit more headroom than I had before. I then went through each phrase, turned up material that I felt was still too quiet, and made sure that the most important material was really out in front and prominent. Then I’d listen to the whole section again. Once I had done this with all three sections, I’d listen to the movement as a whole. I imagine the final mixdown process may take a couple of weeks.

Since I shared the audio for DC-9, I figured I’d share the score for the string quartet part for the movement. This movement features some exotic scales. While the movement is nominally in G, the B section uses the scale G, Ab (G#), Bb (A#), B, D, and F#. This scale has few possibilities in terms of traditional major and minor triads, including only G minor, G major, and B minor. The scale for the A section is even more limiting, featuring G, G#, A, B, and F#. In all honesty, using this scale feels more like being in B (B, F#, G, G#, and A), as you get a fifth between B and F#. From this scale you get no triads at all, but you do get a G major seventh chord missing the fifth, and a B minor seventh chord missing the third. I feel that this yields some really cool harmonies when you start to use four notes at a time.
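
Out of curiosity, that triad count is easy to verify by brute force (a quick hypothetical check, with pitch classes numbered 0-11 starting from C):

NAMES = ["C", "C#", "D", "Eb", "E", "F", "F#", "G", "Ab", "A", "Bb", "B"]
scale = {7, 8, 10, 11, 2, 6}   # G, Ab, Bb, B, D, F#

for root in range(12):
    major = {root, (root + 4) % 12, (root + 7) % 12}
    minor = {root, (root + 3) % 12, (root + 7) % 12}
    if major <= scale:
        print(NAMES[root], "major")
    if minor <= scale:
        print(NAMES[root], "minor")
# prints exactly: G major, G minor, B minor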

Sabbatical: Week 13 Update

I am pretty much where, for the past couple of weeks, I had hoped to be at this point. I recorded seven phrases this week. While this may not sound like much, it is pretty good given that it was a three-day work week due to Thanksgiving. Six of the phrases were on the electric cello, for the middle sections of TriStar, 737, A300, 727, DC-10, and DC-9. This allowed me to complete my electric cello recordings. The one additional phrase was a middle-section phrase for DC-10 on an alto Taishogoto.

The Taishogoto, also called the Nagoya Harp, was invented by musician Goro Morita in 1912. The instrument uses a typewriter-like mechanism to change the pitch of a series of identically tuned strings, which are typically strummed with a plectrum. Some instruments also feature one or more drone strings, often tuned an octave lower. The instrument I have, manufactured by Suzuki, is an alto Taishogoto with no drone. This instrument does have four strings, one of which is pitched an octave lower than the rest. Modern Taishogotos, such as the one I have, are usually set up as electric instruments, featuring volume and tone knobs as well as a standard 1/4″ audio out. Having the instrument electrified makes it an excellent option for pairing with guitar effects pedals. I, however, recorded the instrument dry so I can take my pick of effects in Logic Pro during the mix process.

I do not plan on recording Taishogoto on every movement. After recording some phrases for a variety of movements, I plan on moving over to putting together the final mixes toward the end of next week. Since I shared the new mix of A300, I will re-share the score for the string quartet for those who want to follow along. This movement is in B minor / Dorian, with the notes B, C#, D, F#, and G# used during the middle section and B, C#, F#, and A# used in the beginning and end sections. I particularly like the end of this excerpt, with the first violin moving down to the dominant (F#) while the second violin settles on the tonic (B), ending on an open fifth.

Experiment 8: Bass Harmonica

For this month’s experiment I created a sample player that triggers bass harmonica samples. I based the patch on Lo-Fi Sitar by a user called woiperdinger. This patch was, in turn, based on Lo-Fi Piano by Henr!k. According to this user, that patch was based upon a patch called Piano48 by Critter & Guitari.

The number 48 in the title refers to the number of keys / samples in the patch. Accordingly, my patch has the potential for 48 different notes, although only 24 notes are actually implemented. This is due to the fact that the bass harmonica I used to create the sample set only features 24 pitches. I recorded the samples using a pencil condenser microphone. I pitch-corrected each note (my bass harmonica is fairly out of tune at this point), and I EQed each note to balance the volumes a bit. Initially I had problems because I had recorded the samples at a higher sample rate (48 kHz) than the Organelle operates at (44.1 kHz). This resulted in the pitch being lower than planned, but it was easily fixed by adjusting the sample rate on each sample. I had planned on recording the remaining notes using my Hohner Chromonica 64, but I ran out of steam. Perhaps I will expand the sample set in a future release.

The way this patch works is that every single note has its own sample. Furthermore, each note is treated as its own voice, such that my 24-note bass harmonica patch allows all 24 notes to be played simultaneously. The advantage of giving each note its own sample is that each note will sound different, allowing the instrument to sound more naturalistic. For instance, the low D# in my sample set is really buzzy sounding, because that note buzzes on my instrument. Occasionally hitting a note with a different tone color makes it sound more realistic. Furthermore, none of the samples have to be rescaled to create other pitches. Rescaling a sample either stretches the time out (for a lower pitch) or squashes the time (for a higher pitch), again creating a lack of realism.

The image above looks like 48 individual subroutines, but it is actually 48 instantiations of a single abstraction. The abstraction is called sampler-voice, and each instance of this abstraction is passed the file name of the sample along with the key number that triggers the sample. The key numbers are MIDI key numbers, where 60 is middle C, and each number up or down is one chromatic note away (so 61 would be C#).

Inside sampler-voice we see basically two algorithms, one that executes when the abstraction loads (using loadbang), and one that runs while the patch is operating. If we look at the loadbang portion of the abstraction, we see that it uses the object soundfiler to load the given sample into an array. This sample is then displayed in the canvas to the left. soundfiler reports the number of samples out of its left outlet. This number is then divided by 44.1 to yield the length of the sample in milliseconds.
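
That conversion is just the sample rate expressed in kilohertz (a trivial sketch of the arithmetic):

SAMPLE_RATE_KHZ = 44.1

def length_ms(num_samples):
    # soundfiler reports a sample count; dividing by 44.1 gives milliseconds at 44.1 kHz
    return num_samples / SAMPLE_RATE_KHZ

print(length_ms(88200))   # 2000.0 ms for a two-second sample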

As previously stated, the balance of the algorithm operates while the program is running. The top part of the algorithm, r notes, r notes_seq, r zero_notes, listens for notes. The object stripnote is being used to isolate the MIDI note number of the given event. This is then passed through a mod (%) 48 object, as the instrument itself only has 48 notes. I could have changed this value to 24 if I wanted every sample to recur once every two octaves. The object sel $2 is then used to filter out all notes except the one that should trigger the given sample ($2 means the second value passed to the abstraction). The portion of the algorithm that reads the sample from the array is tabread4~ $1-array.
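
Written out in Python for clarity, one voice’s note filtering looks roughly like this (a sketch of the logic, not the actual Pd objects):

VOICE_KEY = 38     # the $2 argument: the MIDI note this voice answers to (hypothetical value)
KEY_RANGE = 48     # the % 48 object folds incoming notes onto the instrument's 48 keys

def handle_note(midi_note, velocity):
    if velocity == 0:            # stripnote: note-offs never reach the trigger logic
        return
    if midi_note % KEY_RANGE == VOICE_KEY % KEY_RANGE:   # sel $2: only this voice's note passes
        start_playback()         # in the patch, this drives tabread4~ $1-array

def start_playback():
    pass                         # placeholder for the table-reading portion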

Knob 1 of the Organelle is used to control the pitch of the instrument (in case you want to transpose the sample). The second knob controls both the output level of the reverb and the mix between the dry (unprocessed) sound and the wet (reverberated) sound. In Lo-Fi Sitar, these two parameters each had their own knob; I combined them to allow for one additional control. Knob 3 is a decay control that can be used if you don’t want the whole sample to play. The final knob controls volume, which is useful with a wind controller such as the Warbl, as breath pressure can then be mapped to the volume of the sample.

The PureData patch that controls the accompaniment is basically the finished version of the patch I’ve been expanding through this grant project. In addition to the previously used meters 4/4, 3/4, and 7/8, I’ve added 5/8. I’d share information about the algorithm itself, but it is just more of the same. Likewise, I haven’t done anything new with the EYESY. I’m hoping next month I’ll have the time to tweak an existing program for EYESY, but I just didn’t have the time to do that this month.

I should have probably kept the algorithm at a slower tempo, as I think the music works better that way. The bass harmonica samples sound fairly natural, except when the Organelle seems to accidentally trigger multiple samples at once. I have a theory that the Warbl uses a MIDI Note On message with a velocity of 0 to stop notes, which is an acceptable use of a Note On message, but that PureData uses it to trigger a note. If this is the case, I should be able to fix it in the next release of the patch.
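
If that theory is right, the fix would just be to treat those messages as note-offs before they reach the trigger logic, along these lines (hypothetical sketch):

def on_midi_note(note, velocity):
    # A Note On with velocity 0 is a legal way to end a note in MIDI,
    # so route it to the release logic instead of triggering a new sample.
    if velocity == 0:
        release_note(note)
    else:
        trigger_sample(note, velocity)

def trigger_sample(note, velocity):
    pass   # placeholder: start the sample voice for this note

def release_note(note):
    pass   # placeholder: begin the decay / stop the voice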

It certainly sounds like I need to practice my EVI fingering more, but I found that the limited pitch range (two octaves) of the sample player made the fingering easier to keep track of. Since you cannot use your embouchure with an EVI, you use range fingerings in order to change range. With the Warbl, your right hand (lower hand) is doing traditional valve fingerings, while your left hand (upper hand) is doing fingerings, based upon traditional valve fingerings, that control which range you are playing in. Keeping track of how the left hand affects the notes being triggered by the right hand has been my stumbling block in terms of learning EVI fingering. However, a two-octave range means you really only need to keep track of four ranges. I found the breath control of the bass harmonica samples to be adequate. I think I’d really have to spend some time calibrating the Warbl and the Organelle to come up with settings that optimize the use of breath control. Next month I hope to create a more fanciful sample-based instrument, and maybe move forward on programming for the EYESY.

Sabbatical: Week 12 Update

It has been a productive week, with 13 phrases recorded on electric cello. I recorded phrases for the beginning and end sections for TriStar, A300, 727, DC-10, and DC-9. I also recorded phrases for the center sections of DC-8, 707, and 747. This leaves only six electric cello phrases to record next week, although it is a short work week due to Thanksgiving. Last week I mentioned that I was thinning out the orchestra samples in some of the movements. This week I continued that process, thinning out A300, DC-8, 727, 707, and DC-10. In fact, I only have two more movements to thin out.

On the next album front, I now have three working algorithms from the first batch of six movements from the album. Between ME7ROPOL17AN 7RANSPOR7A71ON AU74OR17Y and Rotate, I’ve been having a lot of fun creating and releasing albums. Given that the next major album will not be ready for three or four years, my plan is to sneak in some lower stakes albums in the meanwhile. One of them may be an album of live performances of Rotate. On Tuesday I will be performing with the New London Drone Orchestra. Since I’ve been playing electric cello for the last couple of weeks, I figure I’ll play that instrument while I’m good and warmed up. I’ll be running the instrument through a bunch of effects, and I hope to record the audio of my contribution to the performance. Assuming all goes well, I may continue to perform with the group a couple of times every year. I may be able to take my recordings from those performances to put together an ambient album.

Since I’m sharing the audio for 727 this week, I figured I’d include the score to the string quartet part for those who want to follow along. At rehearsal H, only the notes A, C, D, and E are used. These notes work out really well for bowed strings, as they’re all open strings on one or more of the instruments. This allows me to use harmonics, a favorite musical sound of mine, for the last four measures.

Sabbatical: Week 11 Update

This week I managed to mix and incorporate the Jetliner String Quartet recording for 737. I also was able to record eight phrases on my electric cello. All in all, I recorded two phrases for each of the following: 737, DC-8, 707, & 747. While I haven’t played cello regularly in nearly 40 years, I find that I am slightly better at it than playing the trombone. I’m still not particularly good at playing the instrument, but if you slap a bunch of effects on it, it does sound nice and spacey.

I’ve also been re-editing the string orchestra samples. One of the first things I did for the Rotate project was to add samples of Musiversal’s Budapest String Orchestra that I had recorded for my previous album. These samples were added right after the backing tracks were recorded. Accordingly, I added a lot of them, and now that the recordings are getting kind of thick, I want to thin out the string orchestra samples so they do not compete as much with the string quartet recordings. I managed to thin out TriStar and 737 in this manner. All in all, it was a decent amount of work accomplished for a week in which I was driving to tech rehearsals in Andover, MA for more than half of the week. It puts me a bit ahead of the game in terms of what I hope to accomplish next week.

Since I’ve been posting teasers related to the next album project over the last couple of updates, I’ll share a bit more. I’m pleased to announce that I have working algorithms for two of the six movements that I plan on recording the backing tracks for this coming summer. At the rate I’m crafting these algorithms, I could be ready to record those backing tracks sometime in early 2024. Regardless, I will start sharing examples from these algorithms early in the new year.

As I posted a link to the new mix of DC-8 featuring the Jetliner String Quartet recording, I’ll repost the string score for the movement. This is the only movement that uses quarter note arpeggiations. I’m also fond of the D# diminished chord over an E pedal at rehearsal C. I think it’s a particularly tasty harmony.

Sabbatical: Week 10 Update

I had hoped to post this on Friday, Saturday, or Sunday, but it has been a busy time. The good news is that I got more work done than I had expected to. I mixed and incorporated the string quartet recordings for eight of the nine movements. Some of the movements had multiple usable takes, so in some instances I chose to double (or, in one instance, triple) track the string quartet recordings to thicken things up a bit. Ultimately, I was able to add string quartet recordings to TriStar, A300, DC-8, 727, 707, DC-10, DC-9, and 747.

This will be a busy week for me as I am in tech week for a production of A Wrinkle in Time up in Andover. That being said, I expect to be able to complete the final string quartet mix, and to be able to get started recording some electric cello, which will put me a bit ahead of schedule. Since I have little to share this week, I’ll share a bit more about my next album project that I unveiled last week.

My plan is to have the album consist of 18 tracks, which should make a good middle ground between ME7ROPOL17AN 7RANSPOR7A71ON AU74OR17Y‘s lots-of-short-tracks approach and Rotate‘s few-long-tracks approach. As is the case with Rotate, the drum machine and synth parts will be generated by algorithms written in PureData. However, there will be three different broad models for these algorithms, so that this forthcoming album will feature more variety. The plan is to record the backing tracks for six of the movements during the summer of 2024, another six (using different algorithms) during the following summer, and a final six movements (using a third set of algorithms) during the summer of 2026.

Since I’ve released the recording of TriStar featuring the string quartet recording, I’ll re-share the score for the quartet for those who want to follow along . . .

Sabbatical: Week 9 Update

Well, it looks like I’m a Trombone Champ! I managed to record 7 trombone phrases this week, which is not a lot of work, but it is just enough for me to have finished all of the trombone recording I had wanted to do. Thus, I can put the trombone back up in the attic until my next major recording project. My embouchure probably only improved marginally over the two and a half weeks of recording. I think if I plan on recording much again with the instrument, I should take it out a few weeks before I plan on starting the project, and get my lip in better shape.

Three of the phrases I recorded were for 727. The remaining four phrases were trombone recordings for the center sections of A300, DC-10, DC-9, and 747. While I’m ahead of where I thought I’d be last week, it doesn’t really change the schedule much. Tomorrow I will be recording a string quartet in Providence. These recordings will span the transition from the center section to the final section of each movement. I will likely be editing and mixing these recordings over the next couple of weeks. I’m expecting that time frame as the next couple of weeks will be busy for me. I’ll be taking Thursday and Friday off next week to go to an event in Boston. The following week I will be going into tech week for a production of A Wrinkle in Time in Andover. This means that much of next week will be spent finalizing my sound design work for the production, so I may get little to no work on Rotate done next week. Accordingly, the new schedule for the rest of the semester is . . .

Week 10: Edit / mix String recordings
Week 11: Edit / mix String recordings
Week 12: Cello
Week 13: Cello
Week 14: Taishogoto
Week 15: Taishogoto

I had mentioned last week that I may have some information to share regarding progress on a closely related project I have been working on. Since the next couple of weeks may also be light weeks, I won’t share everything all at once; that way I will have things to share for the first half of November. That being said, I’ve made significant progress on the plan for my next studio album as Darth Presley. I plan on taking three to four years to complete the next one.

While I’m proud of my work on both ME7ROPOL17AN 7RANSPOR7A71ON AU74OR17Y and Rotate, I feel like both albums are a bit too consistent. The movements within each of the two projects are very similar, varying mainly in tempo, pitch collection, and sometimes instrumentation. This is why I want to spend more time on the next studio album. I have some other material I can likely release in the next few years on the side: songs with lyrics, live recordings from Rotate, and other material. While I have more work to share about the next project, I’ll save it for the next couple of weeks.