Digital Innovation Grant: October 9th, 2025

So my previous two experiments got me excited to start working on the music videos for my forthcoming album Monstrum Pacificum Bellicosum. I will likely start releasing music from this album in April 2026 on Bandcamp, and hope to release the entire completed album on platforms such as Spotify, Apple Music, and Amazon Music in late December 2026. Once the album is released, I will also be releasing music videos for the individual tracks from the album.

That being said, I have already started working on rough drafts of the music videos for some of the tracks from the album. In January, the Carole-Calo Gallery will be hosting a show of Stonehill faculty work. I hope to have first drafts of the videos for some (possibly all) of the tracks from the album ready for inclusion in the faculty show. Likewise, Bleep Blorp 2026 will be on April 18th at Stonehill College. I hope to include some (possibly all) of the music videos for the album in a gallery show for Bleep Blorp 2026 as well.

While I have had significant problems controlling the Hypno through MIDI, I decided to control the video this way anyway. I settled on this approach for two reasons. First, when paired with Pure Data, MIDI offers precise timing, allowing the video to be tightly synchronized with the audio. Second, MIDI allows me to control as many parameters at once as I want.

In comparison to my previous experiments, I wanted to art direct this work a bit more. The first decision I made was to keep the video processing far simpler for the faster tracks on the album, and to gradually move toward more intensive processing for the slower tracks. This movement from simplicity to complexity is achieved in three ways. First, fewer parameters are controlled in the simpler, faster tracks. Second, I restrict the range of outcomes for parameters in the faster tracks. Finally, as the tracks become slower and feature more complex processing, I add in more possibilities for source video.

Monstrum Pacificum Bellicosum consists of 18 tracks, at nine different tempos. To put this another way, each tempo is used on two different tracks. Accordingly, each level of processing complexity is used for two tracks from the album.

The source video for Monstrum Pacificum Bellicosum all comes from the 1925 silent film version of Sir Arthur Conan Doyle's The Lost World. This film was the Jurassic Park of its day, featuring plenty of high quality stop motion animation of dinosaurs. I edited the film down to its stop motion animation, and cut these segments into 40 second phrases, resulting in 25 video phrases, although a couple of these phrases include non-stop-motion shots to round out the 40 seconds.

I selected the nine most active of these 25 video phrases to be the source material for the fastest tracks, both of which are at the tempo of 132 beats per minute. For each successive increase in complexity, each of which comes with a decrease in tempo, I will add two more of the original 25 video phrases. In addition, for each increase in complexity, I will also add seven more 40 second phrases comprised of footage derived from the previous tier of complexity.
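
For readers who like to see the arithmetic laid out, here is a minimal Python sketch (my own illustration, not code from the project) of how the source pool grows across the nine complexity tiers:

    # Source-pool arithmetic for the nine complexity tiers (illustrative only).
    ORIGINAL_PHRASES = 25  # 40-second phrases cut from The Lost World

    def pool_sizes(levels=9, start=9, add_original=2, add_derived=7):
        """Yield (level, original, derived, total) for each tier."""
        original, derived = start, 0
        for level in range(1, levels + 1):
            yield level, original, derived, original + derived
            original = min(original + add_original, ORIGINAL_PHRASES)
            derived += add_derived

    for level, orig, deriv, total in pool_sizes():
        print(f"level {level}: {orig} original + {deriv} derived = {total} phrases")

By tier nine, all 25 original phrases are in play, alongside 56 derived phrases.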

Another aspect used to create a sense of increasing complexity is the number of parameters manipulated. For all videos, a video phrase is selected randomly at the beginning of each phrase. Likewise, the feedback mode is changed at the beginning of each phrase. For the fastest tracks, those at 132 bpm, the Pure Data algorithm only changes four additional parameters: the master hue (color shift), the master gain, and the frequency of both oscillators A and B. Recall from a previous entry that frequency is best understood as zooming in and out. At the beginning of every phrase, the algorithm selects which parameters will be changed during the phrase, with every parameter having a fifty percent chance of being selected.
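
Here is a minimal Python sketch of that per-phrase coin flip, mirroring the random 2 objects in the patch (the parameter names come from the text above; the actual CC assignments live in the Hypno's MIDI documentation, so I omit them here):

    import random

    PARAMS = ["master hue", "master gain", "freq A", "freq B"]

    def choose_active_params():
        """At each phrase boundary, each parameter gets a 50% chance
        of being manipulated during the coming phrase."""
        return {name for name in PARAMS if random.random() < 0.5}

    print("active this phrase:", choose_active_params() or "none")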

Figure 1: The first 3 levels of complexity

Complexity  Tempo  P1           P2           P3       P4       Range  Track #  Track #
1           132    Color        Gain         Freq A   Freq B   4      8        16
2           120    C. Offset A  C. Offset B  Sat. A   Sat. B   8      5        15
3           112    Rotate A     Rotate B     Polar A  Polar B  16     6        18

I also control the complexity of the manipulation by limiting the range of variation for each parameter. Initially I change parameters using what is essentially a low frequency oscillator set to a triangle wave. For the fastest tracks, the period of this triangle wave repeats every quarter note. Because the algorithm sends out new values every sixteenth note, the parameters stay within a range of four values. Because MIDI controller values are seven bits, ranging between 0 and 127, I center this range on 64 (the center value), save for the gain parameter. When you put the gain control in the center, you are actually at a gain level of 0, as the lower half of the settings are negative gain levels, while the upper half are positive gain levels. So for the gain, I center the range on 96, which is halfway up the positive gain levels.
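
A minimal sketch of one way to build such a stepped triangle wave in Python, assuming a new value on every sixteenth note:

    def triangle_steps(period_steps, center=64):
        """One period of a stepped triangle, one value per sixteenth note.
        A quarter-note period (4 steps) keeps parameters within a few
        values of the center; gain uses center=96 instead of 64."""
        half = period_steps // 2
        rise = list(range(half + 1))         # 0 .. half
        fall = list(range(half - 1, 0, -1))  # half-1 .. 1
        offset = center - half // 2
        return [max(0, min(127, v + offset)) for v in rise + fall]

    print(triangle_steps(4))      # [63, 64, 65, 64], centered on 64
    print(triangle_steps(4, 96))  # gain version, centered on 96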

To date, I have created first drafts of six videos, covering three levels of complexity. The second level of complexity is for the two tracks at (or around) 120 bpm. For these tracks I also begin manipulating the color offset and saturation levels for each of the two video oscillators. The period of the LFO triangle wave controlling these parameters is set to a half note, thus yielding a range of eight values for these parameters.

For the third level of complexity, the videos at or around 112 bpm, I added the parameters of rotation and polarization for each of the two video oscillators. Initially I had intended to have all twelve parameters controlled by an LFO triangle wave, but I found that the Hypno just couldn't handle that much data, and started to make uncontrolled parameter changes. Accordingly, I only use the LFO triangle wave to control the four new parameters (rotation and polarization). The period for these triangle waves is a whole note, yielding a range of 16 values.

The remaining parameters, that is, the parameters introduced in the previous two levels of complexity (color, gain, frequency, color offset, and saturation), are randomly changed every beat. Again, these changes only occur when that parameter is set to change during a given phrase. To match the complexity of the other four parameters, the range for these random changes is 16.
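
For contrast with the triangle LFO above, here is a hedged sketch of that per-beat random change, confined to a window of 16 values around the center:

    import random

    def random_step(center=64, span=16):
        """Per-beat random value confined to a span of 16 values,
        centered like the triangle version (96 for gain)."""
        lo = max(0, min(127 - span + 1, center - span // 2))
        return lo + random.randrange(span)

    print([random_step() for _ in range(4)])  # one value per beat, 56..71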

Figure one is a basic summary of the first three levels of complexity for the videos for Monstrum Pacificum Bellicosum. The final two fields in the table are the track numbers that correspond to each complexity level. While I have completed drafts for these six tracks (5, 6, 8, 15, 16, and 18), I will not be sharing them early, as I want them to be a surprise both for the gallery show and for when I ultimately post them online once the album is complete. That being said, I do want to leave readers with some visual fun, so I've included some screen shots from the first drafts of these six videos. In the interest of highlighting how these parameters can create variety, I have selected screen shots that are all manipulations of the same image of a pterodactyl.

I plan on doing more updates as I complete more videos. In this report I have made the decision to not include any Pure Data code, as all the code I'm using to create these videos is essentially derived from code I have already presented.

Screen shot from video draft for Windghost (monster 5)

Screen shot from video draft for Windghost (monster 5)

Screen shot from video draft for Windghost (monster 5)

Screen shot from video draft for Megalodon (monster 15)

Screen shot from video draft for Moonbeast (monster 16)

Screen shot from video draft for Rukarazyll (monster 18)

Digital Innovation Grant: July 29th Update

It was a fairly short jump from my previous experiment to this one. For the music video for 21AC3627AB4, I focused on using constant, smooth changes of selected parameters to manipulate video footage using the Sleepy Circuits Hypno. For this experiment, I wanted to focus on abrupt parameter changes in order to create a sense of visual rhythm. In order to achieve this, there were very few changes I had to make to the Pure Data algorithm I used to control the Hypno.

The overall algorithm for creating the video for 5B4C31 is nearly identical to the algorithm for creating the video for 21AC3627AB4. The right hand branch off of the counter contains a mod (% 4) object followed by a sel 0 object. This causes the subroutines below it to execute once every four sixteenth notes, or to put it another way, once every beat.

As was the case with the previous experiment, each of the makeautomation subroutines is largely the same, so we'll look at the first one. This version of makeautomation is far simpler than the one used in the previous experiment. This subroutine sends controller values for the first eight parameters used in the experiment. Each column is largely identical to the rest. The spigot object controls whether or not a bang is sent to the random 128 object below it: when the value stored in the parameters array is zero, the bang does not pass through; when it is one, it does. The random 128 object outputs a value between 0 and 127 inclusive, taking advantage of the full range available within the MIDI specification. This value is then sent to the Hypno using a ctlout object.
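
As a rough Python translation of one such column (spigot, then random 128, then ctlout), using the mido library; the controller number here is just a placeholder, not an actual Hypno assignment:

    import random
    import mido

    outport = mido.open_output()  # in practice, select the Hypno's MIDI port

    def column(cc, gate_open):
        """If the spigot is open (gate_open == 1), send a random value
        0-127 on controller cc, channel 16 (mido numbers channels 0-15)."""
        if gate_open:
            outport.send(mido.Message('control_change', channel=15,
                                      control=cc, value=random.randrange(128)))

    column(1, gate_open=1)  # placeholder CC number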

Screenshot

The video input used for this experiment was the video for the next two tracks on Point Nemo, namely 21AC3627AB4 and 87ED21857E5. The video for 21AC3627AB4 is more chaotic in nature than that of 5B4C31. Given the abrupt changes, I had thought that the latter video would be more chaotic. However, it is likely the tempo difference (the former is at 180 beats per minute, while the latter is at 60 beats per minute) that is one of the main contributors to the somewhat calmer flow of 5B4C31.

The sudden changes and the slower tempo require far less MIDI data to be sent to the Hypno, which resulted in far fewer glitches in the form of settings changes that were not specified by the algorithm. This was definitely a big problem with the previous experiment. In fact, there was only one moment where one of the oscillators changed shapes, which I manually set back to the video input shape.

I purposefully created the videos for Point Nemo in reverse order, as I wanted the videos to become simpler as the album progresses, having the source video, late afternoon sunlight glistening on waves, gradually reveal itself over a one hour experience. Given that I know what the source video is, I can occasionally see glimpses of it in the first two movements. That being said, there is a considerable step down in complexity between the second and third movements.

Another thing I discovered in this experiment was that many of the visual beats came out quite dark. This can be due to any number of factors, including bad settings for color, gain, frequency (zoom), and the like. To deal with this, I recorded twice as much video as I needed, and then simply edited out the dark sections. This required a couple of hours of work, which somewhat defeats the goal of creating a music video in real time, though a couple of hours to create a music video is still a fairly light workload. For future endeavors, I could limit the range of values to avoid such poor settings, or I could come up with algorithms to reject certain results.

In fact, this experiment was successful enough that I have devised a rough plan for creating music videos for my forthcoming album, Monstrum Pacificum Bellicosum, which is due out at the end of 2026. Accordingly, I intend to start working on those videos relatively soon. The first step will likely be to find footage to manipulate. The current plan is to use footage from public domain silent films.

I am fairly pleased with the number of streams of the Point Nemo videos on YouTube (5B4C31: 15, 21AC3627AB4: 90, 87ED21857E5: 213, and 364F234F6231: 118), particularly considering that the first video has been live for less than 24 hours. While those statistics may not seem earth shattering, with the exception of the stats for 5B4C31, they surpass the streams those pieces have had through Bandcamp and DistroKid, so the videos are definitely helping with the visibility and distribution of the music.

I am considering a public showing of the Point Nemo videos at Stonehill College, and I definitely plan on doing a live concert using the Hypno. However, my elder cat has a lot of health issues, so I'm trying to figure out whether it would be better to schedule such events in the Fall or Spring semester. I am also considering printing selected individual frames from the Point Nemo videos and treating them as 2D art pieces. Since I do plan on using the Hypno to create videos for Monstrum Pacificum Bellicosum, I plan on writing studio reports on that work and treating them as part of my Digital Innovation Grant. That being said, it may be several months until I am ready to report on my work for Monstrum Pacificum Bellicosum.

Digital Innovation Grant: July 25th Update

The good news is that the Hypno features an extensive MIDI implementation. The bad news is that it may not be implemented well. The MIDI specification allows for 128 different continuous controllers, which can do things like change the volume or panning of a sound. The Hypno makes use of 50 of these, which is far more than most MIDI devices I am used to. In addition, 10 MIDI notes act as triggers for the Hypno.

I came up with an algorithm in Pure Data that makes use of 35 of the controller values and one of the triggers. This is where the bad news comes in. When I run the algorithm, there are unpredictable results. Namely, two of the triggers I do not use are the triggers that change the shape of oscillators A and B, and when I run the algorithm those oscillator shapes often change without being directed to do so.

I spent a lot of time troubleshooting the algorithm, including recording the MIDI data in Logic Pro (see below). The MIDI data is coming out of Pure Data exactly as expected. When I cut the algorithm back to sending just one set of values (for instance, the note trigger or one of the controller values), the Hypno responds as predicted. However, sometimes just adding a second controller causes the Hypno to begin changing oscillator shapes without being prompted to. It may also be changing other values that I am simply not able to track. My working theory is that the Hypno cannot handle rapid MIDI input across several controller values.

In fact, it was this problem, which I wasn't able to solve in March, that caused me to back burner my work on this project until the summer. Unfortunately, the extra time and brain space afforded by the summer did not yield different results. So, I decided to move forward with the algorithm as it is. My intent for this project was to have both oscillators using the video input shape with stored videos on a USB thumb drive (in this case, video from the previous two experiments, that is, video from the final two tracks of Point Nemo). In practice, while I was recording the video output from the Hypno, whenever either oscillator shape changed, I changed it back to video input as quickly as I could. This added a bit more chaos to an already chaotic video.

Alright, so let's look at the algorithm in Pure Data. At the bottom left we see the part of the algorithm that sends the MIDI note out (it ends with a noteout object). In the upper left we see the initialize subroutine, which includes translating the tempo (180 bpm) into a number of milliseconds. It also includes the initialization of an array called parameters. Finally, it sets the MIDI channel to channel 16 (the Hypno is locked to channel 16, and cannot be set to other MIDI channels). In the center of the screen, we see the primary algorithm. It includes a metronome, with a counter right below it. Beneath the number object below the counter we see three different branches: one ending in a send object, the second ending in two subroutines labeled changevideo, and the final ending in six subroutines labeled makeautomation.
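
To make the timing concrete, here is the tempo-to-milliseconds conversion from the initialize subroutine, sketched in Python:

    TEMPO_BPM = 180

    def sixteenth_ms(bpm):
        """A quarter note lasts 60000/bpm milliseconds; a sixteenth
        note is a quarter of that. This is the metro interval."""
        return 60000 / bpm / 4

    print(sixteenth_ms(TEMPO_BPM))  # ~83.33 ms per sixteenth at 180 bpm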

We have seen various versions of all of these subroutines before. The first, which terminates in [send thenote], simply sends MIDI note number 123 to the Hypno once every four measures (16 sixteenth notes multiplied by four measures equals 64). This is the note number that toggles the feedback mode of the Hypno. The changevideo subroutines sit under a % 128 object. This causes these subroutines to execute once every eight measures (two times 64). The subroutine changevideo2 is similar to changevideo1, so let's look at changevideo1.

The bulk of this subroutine is 30 instances of a random 2 object followed by a tabwrite parameters object. What this means is that for each of 30 parameters, which map onto controller values sent to the Hypno, there is a fifty percent chance for that parameter to be set either to change or not to change once every eight measures. The changevideo2 subroutine simply contains the remaining five instances of the parameters.

On the left we see two random 2 objects, each followed by a * 100 object. In each case, this results in either the number 0 or the number 100 being output. These are then sent to ctlout 37 and ctlout 46, the controllers that set the file index number for oscillators A and B respectively. To put it in other terms, this causes the video input to toggle between the two video files stored on the USB thumb drive.
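
Putting changevideo1 and changevideo2 together, the logic amounts to something like the following Python sketch (illustrative only; the real patch does this with random 2, tabwrite, and ctlout objects):

    import random

    NUM_PARAMS = 35
    parameters = [0] * NUM_PARAMS  # the Pd array written by tabwrite

    def changevideo():
        """Every eight measures: re-roll which parameters are active
        (a 50/50 chance each), and toggle each oscillator's source file."""
        for i in range(NUM_PARAMS):
            parameters[i] = random.randrange(2)  # random 2 -> 0 or 1
        file_a = random.randrange(2) * 100       # sent via ctlout 37
        file_b = random.randrange(2) * 100       # sent via ctlout 46
        return file_a, file_b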

The six makeautomation subroutines are mostly similar to each other, so we'll look at makeautomation1. This subroutine features eight instances of the same thing. Each starts with some version of a mod statement (% 224) or a similar expr object. The number 224 corresponds to 14 measures of music; half of that, seven measures, is 112 sixteenth notes. So why am I dealing with seven measure units here? MIDI controller values range between 0 and 127, and 112 is the largest multiple of 16 that is still less than 127. Each successive expr object increments the incoming value by four before modding it. This is done so that the parameters are not all set to the same value.

After the mod or expr object we see a moses object. When the value coming into the left inlet is below the stated value, in this case 112, that value is passed to the left outlet. If it is equal to or above that value, it passes out of the right outlet. This allows us to insert an expr object that subtracts the value from 224, creating a motion from 112 back down to 0. Thus, what this subroutine does is create incremental parameter changes that trace out triangle waveforms. This can be confirmed by looking at the recorded MIDI data in Logic Pro.
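
In Python terms, the % 224 and moses 112 pair reduces to a simple fold, with each column offset by four so the eight triangles stay out of phase (my paraphrase of the patch, not a literal translation):

    def triangle_cc(count, phase=0):
        """Map a running sixteenth-note count onto a 0-112-0 triangle,
        mirroring % 224 followed by moses 112."""
        v = (count + phase) % 224
        return v if v < 112 else 224 - v  # fold the descent back down

    print([triangle_cc(n) for n in (0, 56, 112, 168, 223)])  # 0 56 112 56 1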

After this, the value passes through a spigot object. Here the spigot is controlled by a value from the parameters array. If that value is 0, the value arriving at the left inlet does not pass through; when it is 1, the value at the left inlet is passed to the outlet. Accordingly, we can see how the changevideo subroutine turns these spigots on and off. The values are then sent to the ctlout objects below.

The subroutine makeautomation1 handles the controllers for the color and the master gain. It also controls the values for frequency, rotation, and polarization for both oscillators. The subroutine makeautomation2 handles controllers for oscillator A, including drift, color offset, saturation, fractal axis, and fractal amount. The subroutine makeautomation3 handles the same parameters for oscillator B. The subroutine makeautomation4 handles values related to feedback, including feedback hue shift, feedback zoom, feedback rotation, feedback X axis, and feedback Y axis. It also handles the feedback-to-gain controllers for both oscillators. The subroutine makeautomation5 handles parameters for video manipulation of oscillator A, including X crop, Y crop, lumakey, lumakey max, and aspect. The final makeautomation subroutine handles the same parameters for oscillator B.

As an experiment, this video was a success in the sense that it answered the question: can the Hypno handle a lot of MIDI controller input? The answer is clearly no. However, while the Hypno did not respond as directed by the algorithm in Pure Data, I cannot say that the results are unsatisfying, though they are considerably more chaotic than the results of the previous two experiments. I will be doing one more experiment with driving the Hypno using MIDI data, and will likely continue to use MIDI when performing with the Hypno. However, when making music videos I will likely use Eurorack control voltages to drive the Hypno, as the results are far more predictable and controllable.

Part of this experiment was to see the outer limits of using MIDI with the Hypno. As already noted, this yielded fairly chaotic results where the source material is almost completely unrecognizable. In practice it seems that I will get more aesthetically pleasing results if I rein in the controller changes, yielding a more subtle response.

Digital Innovation Grant: July 18th Update

Wow. It’s been a long time. An embarrassingly long time. Some of my excuses are typical academic excuses: busy spring 2025 semester, conference presentation due date, and busy summer 2025 semester. However, a couple of them have been personal: my mother had a stroke and my elder cat has had a lot of health problems, and is likely in his last months.

Another thing holding up the process was the final funds. I received the final funds in June, and was able to purchase the remaining items. However, I altered the list a bit, as it seems the primary student use of the video synthesis setup will be students in VPM 248: Sound Synthesis. Accordingly, I purchased some Eurorack gear that will work in combination with the Sleepy Circuits Hypno, as well as some accessories.

In terms of the accessories, I bought a decent USB webcam with manual zoom and manual focus, a tripod, a USB HDMI capture card, a few USB cables, and two USB thumb drives. The camera is a MOKOSE HD webcam with a 5-50mm zoom lens. The lens is a CS-mount lens that can be removed and replaced with other CS-mount lenses.

The Eurorack setup is perhaps of greater interest. In the previous entry, I went into detail about how Eurorack modules can be used to control the settings of the Sleepy Circuits Hypno. The setup includes a Behringer 960 sequential controller, a Behringer CP3A-M mixer, and a Behringer Four LFO enclosed in a Cre8audio NiftyCASE. In addition to these items I purchased a set of 5 patch cables and some rack screws.

The NiftyCASE provides power to the three modules, but also allows for MIDI input via 5-pin DIN as well as USB. Any MIDI input is then converted to control voltages by the NiftyCASE. The Four LFO contains four low frequency oscillators, each of which can output sine, square, sawtooth, ramp, triangle, or trapezoidal waves at frequencies suitable for controlling other modules.

The other two modules (the 960 and the CP3A-M) are recreations of modules from the Moog 900 series of the late 1960s and early 1970s. The 960 Sequential Controller features three rows of eight knobs, each of which is used to set a different frequency. These set frequencies are then played back, looping from left to right. To the right of these knobs are control voltage outputs for each row. These voltages can then be used to control other modules (for instance, the Hypno). In addition to having a speed control, the 960 allows each of the 8 steps to be in play, skip, or stop mode. In play mode, the step is played before moving on to the next step. In skip mode, that step is skipped. In stop mode, when that step is hit, it repeats until it is switched to either play or skip mode.

The CP3A-M is a mixer, but it has a few nice features. If you use only one input, you can use it as an attenuator, that is, as a module that decreases the amplitude of a signal. Likewise, since it has both positive and negative outputs, it can be used as a signal inverter. Finally, in addition to being a four channel mixer, it contains two sets of multiples. Multiples can be used to split a given signal into two or three duplicate copies of the same signal.

Starting in spring 2026, students enrolled in VPM 248: Sound Synthesis will be required to create a music video for their first project using the Hypno in conjunction with the above Eurorack setup. Examples will be presented in the music technology showcase. One additional benefit of this Eurorack setup is that it will also work in conjunction with several of the synthesizers resident in The Song Factory, the music program’s recording studio.

In addition to incorporating this gear into VPM 248, I still have work outstanding for the Digital Innovation Grant. I have to finish music videos for the first two tracks of my album Point Nemo. I also have to organize a concert of live music that will feature my student assistant performing live video using the Hypno in conjunction with the Eurorack setup. I hope to get the two videos done before the beginning of Fall 2025, and I imagine I may be able to schedule the concert for mid November 2025.

Digital Innovation Grant: March 14th Update

Well, color me excited. Spoiler alert, this experiment was pretty darned successful for a few reasons. I’ll save those reasons for later in the post (that’s what we call a teaser in the business). The experiment at hand was to create a music video for the third track of Point Nemo using control voltages to automate settings on the Hypno. While my previous experiment also entailed using control voltages to automate parameters of the Hypno, that experiment focused on smooth flowing changes provided by sine waves.

For this experiment I focused on instantaneous changes primarily using a step sequencer. In particular, I used Behringer’s clone of Moog’s classic 960 sequencer (lower left in the image below). Since control voltages are literally just voltages that are applied to individual modules in order to alter a setting or parameter, a step sequencer is typically a series of knobs or sliders that are used to set individual voltages. These voltages can then be stepped through in sequence, potentially providing a melody or another repeating musical function.

The Moog 960 has three rows of eight knobs. Each row has two identical outputs. Accordingly, when the 960 is being used, it can output three different sequences of voltages. Furthermore, each row has a multiplier switch with three modes, 1x, 2x, and 4x. Thus, each successive setting generates a wider range of voltages. The speed of the 960 can also be controlled by sending individual gate signals to the shift control on the module’s lower right side. Since a gate signal is just an on or off signal, you can actually use a low frequency oscillator set to a square wave to provide a chain of gate signals. To make things even more complicated, I controlled the frequency of this low frequency square wave with a low frequency sine wave, resulting in the tempo increasing and decreasing in an undulating wave.

As mentioned in the previous experiment, the Hypno has seven CV inputs. The three outputs of the Behringer 960 cover three of these. I covered another two by using the same sample and hold setup that I mentioned in the previous experiment, with one circuit being fed pink noise and the other white noise.

The final two control voltages sent to the Hypno came from envelope generators (EGs). Envelope generators are intended to provide sculpted control settings that approximate the behavior of acoustic sound. Most envelope generators reduce a sound's contour to four stages: attack (the time it takes for a sound to reach full volume), decay (the time it takes to fall from full volume to the sustain level), sustain (the level at which a sound holds while the note is held), and release (the time it takes for a sound to become inaudible once the note ends).
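
As a generic illustration of those four stages (a textbook ADSR, not anything Hypno-specific), a piecewise-linear envelope can be sketched like this:

    def adsr(t, attack, decay, sustain, release, gate_len):
        """Envelope level (0-1) at time t seconds, for a gate held
        gate_len seconds; assumes gate_len >= attack + decay."""
        if t < attack:
            return t / attack                                # rise to full level
        if t < attack + decay:
            return 1 - (1 - sustain) * (t - attack) / decay  # fall to sustain
        if t < gate_len:
            return sustain                                   # hold while gated
        return max(0.0, sustain * (1 - (t - gate_len) / release))  # fade out

    print(adsr(0.75, attack=0.1, decay=0.2, sustain=0.6, release=0.5, gate_len=1.0))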

Envelope generators are triggered by a gate signal. Fortunately, the 960 has gate outputs for each of the eight steps of the sequencer. Thus, I attached the fourth gate to one EG and the eighth to another. I then fed each EG into its own attenuator to allow me to easily increase or decrease its effect. One of these two envelope generators was used to control the zoom of oscillator B of the Hypno, which is very noticeable, causing the shape generated by oscillator B to grow and shrink in a very rhythmic, predictable manner.

Since this experiment is focused on instantaneous changes, I decided to also use one of the two gate inputs on the Hypno. In particular, I fed yet another gate output from the 960 to the gate input for oscillator B of the Hypno. This cycles the shape control for the second oscillator.

As stated earlier, I found the results of this experiment to be highly satisfying. First off, the use of the step sequencer provided a visually rhythmic result, which somewhat balances visual variety and consistency. This made improvising with the system a lot more manageable. The system itself provided most of the variety, leaving me to make an occasional tweak to keep things interesting.

For the video input I used a USB drive containing both the video that I used to create the previous experiment and the video of that experiment itself. Thus, when improvising live with the system I regularly switched back and forth between the original (unprocessed) footage of the sun glinting off ocean waves, and the heavily processed video of that footage that appears in the music video for the last track of the album. In this video, then, the viewer can still frequently recognize the source of the video, namely the sun shining on waves. I also regularly changed the setting for the type of feedback used by the Hypno. Finally, I often found that changing the overall hue and saturation levels of the Hypno created satisfying results. To summarize what I've learned: if you use automation to control various parameters in an effective way, it greatly reduces the task of performing other changes live, allowing the improvisor to focus on just a few parameters.

The final satisfying element of this experiment is that I did not have a single connection drop in the HDMI cable, so I was able to record the entire video in a single pass, requiring minimal editing when assembling it into a music video. This really approaches my goal of being able to create a perfectly adequate music video nearly in real time. Now I can shift some of my focus in learning how to use the parameters of the Hypno in a variety of ways, so each music video can be somewhat unique rather than seeming like a cookie cutter of every other video I create using the Hypno.

Digital Innovation Grant: March 13th Update

While I am satisfied with my video for my improvisation on “The Star-Spangled Banner,” I recognize one potential area of improvement. Given that I have two hands I can, at most, turn two knobs on the Hypno at a time. However, given the complexity of the system, I am more likely to move only one knob at a time, particularly given that many parameters require the user to press one or more buttons while turning a knob.

Thus, it is useful to automate parameters on the Hypno to allow for complex, simultaneous parameter changes. Frankly, it is also worth learning how to do this in order to fully comprehend the potential applications of the system. There are two ways to automate settings on the Hypno. One is using control voltages (CV) from Eurorack compatible modular synthesizers. The other is through MIDI (Musical Instrument Digital Interface).

In December 2024, I released a collaborative album of drone music titled Point Nemo. The album contains four tracks. My plan is to make music videos for all four tracks using the Hypno. For the final two tracks of the album I plan on automating parameters using control voltages, while for the first two tracks I will automate settings via MIDI. I will create the music videos in reverse order, using footage from the previous video as the source for the next. Accordingly, as the videos for the album progress, they should make a marked movement from heavily processed to less processed video, revealing a glimpse of the initial source video at the very end. I shot the source video at West Beach in New Bedford, shooting the late afternoon sun glistening off waves in the ocean. This source video relates directly to the title of the album, as Point Nemo is the location in the Pacific Ocean that is farthest from any existing land mass. Namely, it is equidistant from Ducie Island in the Pitcairn Islands group, Motu Nui off the coast of Easter Island, and Maher Island off the coast of Antarctica.

For the experiment at hand, I am using control voltages in conjunction with the Hypno to manipulate the source video, in turn creating the video for the final track of the album, 364F234F6231. Before I get into the thick of it, let me return briefly to the topic of control voltages and modular synthesis. In the late 1960s, the earliest commercially available synthesizers were a series of individual modules that were packaged together. Each module typically performs a single function, and in order to make a sound of even the most basic level of sophistication, the user would have to route audio and control signals from several modules together using patch cables. The control signals allow one module to control another through varying voltages. While such a means of making sound may seem antiquated and difficult, the complexity and nearly endless customization of such a system led to this approach, modular synthesis, making a resurgence starting in the mid 1990s.

Perhaps suitably, the synthesizer I am using for this experiment is a modern clone of a modular synthesizer manufactured by the R. A. Moog Company in the late sixties and early seventies, the System 55. The instrument I have assembled over the years features, amongst other things, five oscillators and a step-sequencer. The Hypno features seven CV inputs as well as two gate inputs (gate signals are on or off signals that are useful for indicating things such as a key being depressed on a keyboard). For this experiment I only used the seven CV inputs.

Accordingly, I needed to feed seven control signals from my System 55 to the Hypno. Five of these come from the oscillators. All five oscillators were put in low frequency mode, that is, their waveforms are slow enough that they do not produce audible pitches. In all five cases I used the sine wave output. Three of these control signals were fed to attenuators, so I could easily reduce the amount of signal being sent to the Hypno. The other two sine wave signals were sent to voltage controlled amplifiers (VCAs). A VCA can be fed a control signal that automates how strong the signal coming out of the module is; that is, the control signal makes the output louder or quieter. In this case, I controlled the VCAs using triangle wave outputs from two of the other oscillators. The outputs from the VCAs were then sent to the Hypno.

In this experiment I wanted to focus on smooth, flowing changes, hence my use of sine waves, which lend the resulting video a wiggly nature. However, in order to send voltages to all seven of the Hypno's CV inputs, I took a different approach with the remaining two signals. Namely, I used the oscillator output of the step sequencer. This can be thought of as a gate signal that works like a metronome, providing a steady beat. This signal was sent to two different sample and hold circuits. While the dual sample and hold module I have in my setup is not technically from the line of System 55 clones, the original Moog System 55 did feature a sample and hold circuit.

So, what is a sample and hold circuit? You feed it an input, and a gate signal. Every time the module receives a gate signal, it samples the voltage at the input at that moment, and it holds the output signal at that level until the next gate signal is received. The most common usage of a sample and hold circuit is to feed it a noise signal, which is essentially random voltages. When used in this manner the module produces steady, but random voltages that change at a steady pulse.

That is exactly what I did here, except I fed one of the sample and hold circuits pink noise while I fed the other white noise. Pink noise has less high frequency content than white noise. The result is that a sample and hold circuit fed pink noise will have a more limited range of output than one fed white noise. Regardless, the two sample and hold circuits produce instantaneous (sudden) changes, which were then sent to the Hypno.
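
For those who think better in code, here is a small Python sketch of a sample and hold circuit clocked by a steady gate (white noise version; pink noise would simply produce a narrower spread of values):

    import random

    def sample_and_hold(noise, gates):
        """On each gate, sample the noise input and hold that value
        until the next gate -- random but steady-stepping voltages."""
        held, out = 0.0, []
        for sample, gate in zip(noise, gates):
            if gate:
                held = sample
            out.append(held)
        return out

    white = [random.uniform(-1, 1) for _ in range(16)]
    clock = [1, 0, 0, 0] * 4  # the sequencer's metronome-like gate output
    print(sample_and_hold(white, clock))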

Just as had happened with the making of my Star-Spangled Banner video, the HDMI connection between the Hypno and my HDMI to USB capture card frequently broke. Thus, rather than generating a continuous twenty-and-a-half-minute video, I generated five different videos ranging in length from about a minute and a half to ten minutes. I generally arranged the video from most heavily manipulated to least manipulated. Throughout the video I feel there are moments where one can recognize that the source video is sunlight glinting off of ocean waves. As stated earlier, my next video experiment will also use control voltages for automation, and will use the results of this experiment as its video input, so you'll want to stay tuned for that.

Digital Innovation Grant: March 12th Update

Spring break has allowed me to catch up on work that I had back burnered a bit. I finished my draft of a Hypno manual . . .

Using the Hypno 1: Connections
Using the Hypno 2: Performance Mode
Using the Hypno 3: Modulation Mode
Using the Hypno 4: Feedback Modes / Feedback Modulation Mode
Using the Hypno 5: Using Input Shapes
Using the Hypno 6: Advanced Mode

Accordingly, I also feel like I have a good enough understanding of the Hypno to start creating videos. First, some context. For the last 75 years, music and music marketing have been increasingly reliant upon visual material. From American Bandstand (1952-1989) and Top of the Pops (1964-2006) to MTV (1981-), YouTube (2005-), and TikTok (2016-), musicians and musical groups have been under increasing pressure to incorporate the visual into their music making routines.

This can be a problem for the independent musician. Creating visual content takes time, expertise, and often money. The more time a musician spends creating visual content, the less time they spend making music. Video synthesis is one potential solution to this problem. Video synthesis typically takes place in real time; thus, creating a video, in theory, could take only the amount of time it takes to play back the music it will accompany. Furthermore, since video synthesis is based upon concepts of sound synthesis and, to a lesser extent, performance, musicians are typically well versed in improvisation, and may also have an understanding of sound synthesis that gives them a basic level of expertise to build upon.

I recently released a recording of an improvisation based upon "The Star-Spangled Banner." To create a video for the piece, I used public domain footage from four Edison Studios films: Raising Old Glory Over Morro Castle (1898), Statue of Liberty (1898), Parade of Marines (1898), and Morning Colors on US Cruiser Raleigh (1899). I used the resulting video file on both of the Hypno's video oscillators, and recorded the video output from a series of improvisations. One issue I had is that if I bumped the HDMI cable, it would momentarily break the connection between the Hypno and the HDMI to USB capture card I was using, causing the video recording to stop.

Ultimately I was able to record five video improvisations, lasting 0:43, 1:25, 1:35, 4:18, and 8:25, totaling 16:26. My recording of "The Star-Spangled Banner" lasts 11:47, so I cut each of the five videos into phrases. I shortened the longest phrases to be between 40 and 41 seconds long. I then arranged the longest phrases from most identifiable to most distorted, and interspersed the shorter phrases, arranging them from most distorted to most identifiable, resulting in the video below.

Using the Hypno 6: Advanced Mode

When not using input shapes, you may press the same combination of buttons (A & B for oscillator A, or B & C for oscillator B) to set some advanced parameters. Some of these are the same as those used for input shapes. For instance, knobs E & F are used to set the luma key maximum (E) and minimum (F) of a given shape. Likewise, the farthest away of the top knobs (knob D for oscillator A and knob A for oscillator B) is used to squash or stretch the vertical dimension of a shape. However, when using the poly shape, this control sets the number of sides of the polygon. In this mode the three nearest knobs (knobs A, B, & C for oscillator A and knobs B, C, & D for oscillator B) are not mapped to any parameter.

Again, regardless of which oscillator is being adjusted, the slider on the left (A) performs a crop on the X axis, while the slider on the right (B) performs a crop on the Y axis. However, when the sine or tan shapes are being used, the slider on the left (A) adds extra modulation on the opposite axis, while the slider on the right (B) adjusts the waveshape of the modulation.

Here’s a Sleepy Circuits quick guide describing some of the controls available in Advanced Mode . . .


video by Sleepy Circuits

The Hypno has still more features, such as Eurorack patching, MIDI control, and presets. Further information about these features can be found on the Sleepy Circuits website as well as on their YouTube channel.

Using the Hypno 5: Using Input Shapes

The Hypno will accept video via USB. This could be a USB webcam, an HDMI camera plugged into an HDMI to USB capture card, or a USB thumb drive that contains video files. Note that input shapes can be used on both oscillators simultaneously. You can change parameters affecting the live input by pressing two buttons: to do this for oscillator A, press buttons A & B; to affect oscillator B, press buttons B & C.

The most fundamental of these controls are the input index and folder controls. These are knobs A & B for oscillator A, and knobs C & D for oscillator B. The outermost knob controls the index (knob A for oscillator A and knob D for oscillator B). Moving this knob to its leftmost (fully counterclockwise) position allows you to switch between two distinct video inputs. When using a USB drive containing numerous video files, the innermost knob (knob B for oscillator A and knob C for oscillator B) navigates the folders, while the index knob (knob A for oscillator A and knob D for oscillator B) navigates the files. If you want to see the file names while you are navigating, you can first turn on help mode by holding buttons A & C down while turning knob F to the right past twelve o'clock. You can turn help mode off by holding down buttons A & C while moving knob F to the left past twelve o'clock.

Help Mode Quick Guide

video by Sleepy Circuits

The third knob from a given oscillator (knob C for oscillator A and knob B for oscillator B) is inactive in this mode. The farthest knob from a given oscillator (knob D for oscillator A and knob A for oscillator B) controls the aspect of the video, with the 12 o'clock position being normal. Moving the knob to the right stretches the video vertically, while moving it to the left squashes the vertical dimension of the video.

Regardless of which oscillator is being adjusted, the top center knob (knob E) controls the luma key maximum setting, while the lower center knob (knob F) sets the luma key minimum value. When the minimum is set higher than the maximum, the luma key values invert. In essence, the luma key allows you to make a portion of a visual image transparent, based upon its brightness. Likewise, regardless of which oscillator is being adjusted, the left slider (slider A) performs a crop of the image on the X axis, while the right slider (slider B) performs a crop of the image on the Y axis.

Here’s a Sleepy Circuits quick guide for using video input . . .


video by Sleepy Circuits

Here’s a useful quick guide by Sleepy Circuits showing how to prepare video and image files for use on a USB drive . . .


video by Sleepy Circuits


Using the Hypno 4: Feedback Modes / Feedback Modulation Mode

To turn feedback on, you use the master gain setting in performance mode. Putting that gain in the center position (12 o'clock) effectively turns the image off. Turning the knob to the right adds positive gain, while turning it to the left adds negative gain. When you move past the halfway point of either side (3 o'clock for positive gain, 9 o'clock for negative gain), you move beyond 100% gain, which introduces feedback. Pressing the center button (B) in performance mode allows the user to toggle between five different feedback modes. Each feedback mode is associated with a different LED color.

Color   Feedback Mode
Red     Regular
Green   Hyper Digital
Yellow  Edgy
Teal    Stable Glitch
Pink    Inverted Stable

A silent video demonstration of the five feedback modes in Hypno.

You can also adjust various feedback settings by holding down the center button (B) and adjusting the various sliders and dials on the Hypno. I call this Feedback Modulation Mode. Moving the dial on the left (A) adjusts the rotation of the feedback: putting the dial to the left of center causes the rotation to move to the left, while moving it to the right of center causes the rotation to move to the right. Moving dial B adjusts the X offset of the feedback, with the center position being no offset. Thus, moving the dial to the left of center causes the feedback to move to the left, and vice versa. Moving dial C adjusts the Y offset of the feedback, with the center position corresponding to no offset; turning the knob to the left moves the feedback down, while moving the knob to the right moves the feedback up. The dial on the right (D) adds modulation to the rotation of the feedback, with the center position being no modulation. Moving the knob to the left of center causes the feedback to rotate counterclockwise, while moving it to the right of center causes it to rotate clockwise.

The upper of the two center knobs (dial E) zooms the feedback, with the center position corresponding to a 1:1 ratio. Moving the knob to the left zooms in, while moving it to the right zooms out. The lower of the two center knobs (dial F) creates a hue shift at the edges of shapes. Because this is related to feedback, it can introduce a gradient effect. The two sliders adjust the amount of feedback that is sent back into the gain of the corresponding shape: slider A affects the gain of shape A, while slider B affects the gain of shape B. In order to do this, we first need to turn on cross modulation by holding the button for the current oscillator (button A for oscillator A and button C for oscillator B). While holding this button, tapping the button for the other oscillator will toggle cross modulation on or off. This is indicated by a green (on) or red (off) LED.

A silent video of most of the feedback modulation options in Regular feedback mode.

A bit of experimentation is called for here in order to get an idea of what the possibilities are. That being said, making numerous setting changes in feedback modulation mode can be difficult to undo, so you might find it useful to restart the Hypno between trying out each of the feedback modes.

Here is the Sleepy Circuits quick guide for Feedback Mode . . .


video by Sleepy Circuits

Likewise, here’s the Sleepy Circuits quick guide for Feedback Controls . . .


video by Sleepy Circuits

Here we find a quick guide by Sleepy Circuits which describes how to use button patching for cross modulation . . .


video by Sleepy Circuits
