Thursday, March 2, 2017

Optical Synthmouses



If you are interested in making your own SinteMouse, here is the step-by-step tutorial I wrote, kept simple. 

http://www.instructables.com/id/Bend-an-Optical-Mouse-to-hear-surface-textures/ 
The editors of Instructables liked the tutorial: they featured it on the main page and gave me a "pro membership" for a year :) 
Several thousand visits! Positive feedback! New ideas suggested by readers! Cool!



A one-minute video of one of these hacked mice in action

https://www.youtube.com/watch?v=luZazXOj9Mc


Idea:
This circuit bending materializes my original premise of "listening to the textures of objects", the one that triggered the development of SoundPaint: a program written in the Processing language (built on Java) that tries to fulfill the premise by digital means, with the possibilities and limitations of that medium. ( http://ignaciodesalterain.blogspot.com/2011/06/soundpaint.html ) 


The "real" surfaces were still silent, after an unsuccessful attempt to hack a wand barcode reader for this purpose. 

I asked myself a question while replacing an old wand barcode reader at work: "Can I hear the barcodes if I somehow connect the wand's sensor to a speaker? As with the hacked ball mice, it would be a phototransistor of some kind connected to a speaker..." 

Then I said to myself: "How about listening to any texture on any surface?... Woooaahh!"



Research process: 
On December 13, 2011, I decided to continue researching my old optical mouse. I had used it for years until I got a PS/2 one; I switched to free up the USB port on my PC. 


 


When I removed the board I saw that it has a hole underneath that exposes the "belly" of the chip, navel and all. 

Behind the small hole there is a tiny camera with very low resolution and a high refresh rate. The pic shows the chip/board upside down, the hole in the board, and the chip cover removed and resting on the board.

I investigated how it works, and how to tap that camera to see its image on a PC, but it is complicated and the Instructable I read requires a specific chip. 

http://www.instructables.com/id/Mouse-Cam/ 
Anyway, this does not fulfill my premise. 

I found a Crazy Mouse that runs away when you try to grab it :) 

http://www.instructables.com/id/Crazy-Mouse/ 

For gamers, you can add "rapidfire" to the mouse click with a 555, or 40106. 

http://www.instructables.com/id/Add-a-rapid-fire-button-to-your-mouse-using-a-555-/ 

But I found nothing useful for my cause... 

So I kept on opening my path, cutting bushes, marking trees. 

 

It's the same pierced chip, its "belly" and "back", shown without the cover and with it in place.

I drilled through the chip with a bit the diameter of the sensor, 5 mm.
I put the cover back under the chip.
I glued the sensor into the hole on the top side with the hot-glue gun.
I cut the traces that originally fed current to the LED and soldered on wires and a 270-ohm resistor. 

 

And so I did the first test in which I tried to "listen" to the newspaper. 

It did not work. 

The LED light interfered with the sensor from behind, above the board. 
The solution was to paint the entire back with matte black paint, so that light only enters through the hole under the mouse. 
Three coats, to avoid any transparency.

 


Now the problem was the reverse: I could not come close to saturating the sensor with the little light that entered through the hole after bouncing off the paper / wood / cloth, etc... 

The signal was barely audible, so I connected it to the base of a transistor (a BC337) to amplify it.
At the transistor's emitter output it can be heard acceptably well... 
You can see the 337 with its three legs between the sensor and the 9V battery in the photo below.





When I got it working I started to move the mouse over everything around me and listen. 
I acquired a notion of "how a texture would sound", an interesting "pseudo-synesthetic" sensation, so to speak. 
A power like those of the X-Men, or Neo watching the Matrix, you know. 
The cross on top of the mouse is exactly over the sensor, so I know where I'm aiming, what I'm hearing.

 


Scouting for the limits of the system, I generated this image. 
Some patterns attempt to generate a square waveform (-_-_-_-_), others a sawtooth (/////). 
And a sinusoid: it oscillates smoothly, like the swing of a pendulum, rising and falling gradually. 
I generated patterns at different scales to see how high and low the pitch of the sound could go. 
The device has very fine resolution: I can hear the lines of a notebook sheet, and thinner ones. Thin enough, I guess. Pierced surfaces like mosquito nets, combs, or a breadboard are fun to hear. 
I cannot hear some reflective surfaces well, like glossy paper or enameled ceramics. 
For a while I gave up on the background-noise problem; the mice work well enough to have fun with. 


22/05/2017 Update:

I received some light sensors that should be more sensitive than the ones I can get in local shops. The ST-1CL3H was a game changer for my project.

Using those, I no longer need a +9V/0V/-9V power supply; a stabilized 5V transformer works fine.
No transistor, no operational amplifier.

Just the sensor connected to the 5V through a resistor, and a cable to take the signal out.


Much better sound quality and signal-to-noise ratio. 


Wednesday, March 1, 2017

Color to sound and Grayscale to sound.

Idea: 

While finishing the first non-optical mouse synths, the ball ones, I came up with another circuit-bending project.
 

Some systems that I maintain at work have barcode readers plugged in.
These have an LED emitter at the tip and a sensor behind it that receives the light bounced back through a hole. 
The voltage at that light sensor varies with the opacity of the surface the wand rests on. 

Passing over a barcode generates a square-ish wave at the sensor output that the digital system can decode into numbers and letters.

It can also be heard, like any signal within the audible frequency range. The voltage varies with the white-to-black passage of the bars, or of whatever we pass the wand over, and that wave can be expressed by a speaker as sound. 

Anything (a photo, a drawing, a wooden board, a monolithic floor, fabric) that has some kind of texture, or lines, or holes, will produce variations at the sensor as the wand passes over it.
 

Conceptually, this project could sit between
Circuit Bending (my project isn't "chance based", as CB is defined; I mostly know what I'm doing)

And

Hardware Hacking https://en.wikipedia.org/wiki/Hacking_of_consumer_electronics



Process:

I then set to work on some broken reader wands, of the ones that are thrown away at work.

I soon found out it was a very complex task... it would be very difficult to amplify the microvolt-level signal (0.000001 V) coming out of the sensor without creating huge distortions.
The emitter/sensor assembly was made of metal, with the board half inside, so it was really hard to get to the sensor and LED wires. 
The wand has a Faraday cage around many of its sensitive elements. 
I destroyed the three non-working wands I had available without ever managing to reach the sensor wires and build a circuit in parallel with the wand's own.



Pixel reading:


Then it occurred to me that "listening to colors" is something that a PC program can do.


I did know some programming, but I was going to learn a lot more in the process...

I know the "eyedropper" tool in Photoshop reads the color code under the mouse pointer and lets us copy that pixel's RGB values, so we can put that color in our palette and paint with it.

I mentioned that to Professor Daniel Argente, and he said he thought he had seen a code segment in the "Processing" language capable of doing that.

I started researching the language and code already written that could do that: Have an image in memory and get the color value of the desired pixel to generate sound based on that info.
 

I found "Pixel Array", an educational piece of code that comes with the programming editor.

This program displays an image, plus a square showing the color of the pixel under a cursor that explores the image array in sequence, like reading a book.
That number, the color code of the pixel being read, lives in a variable somewhere in the code.
It took me a long time to understand the code and find that variable to use. I learned a lot in the process.
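As a sketch of what that sequential reading looks like (written in plain Java rather than Processing, with class and method names of my own invention), the cursor is just an index walking a row-major pixel array like text on a page:

```java
// PixelScan.java - scanning a 1-D pixel array in reading order, as the
// "Pixel Array" example does: the cursor index walks row by row.
public class PixelScan {
    // Processing stores pixels row-major in pixels[]; this computes the index.
    public static int index(int x, int y, int width) {
        return y * width + x;
    }

    public static void main(String[] args) {
        int w = 4, h = 3;
        // Print the indices in reading order, one row per line.
        for (int i = 0; i < w * h; i++) {
            System.out.print(i + (i % w == w - 1 ? "\n" : " "));
        }
        System.out.println(index(2, 1, w)); // 6: third pixel of the second row
    }
}
```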


Sound generation: 


For the sound generation I found the "Basic" example of the Minim library.
It allows generating a sinusoid while adjusting parameters like frequency and amplitude live.
It is the first example of the "Minim" library, which is in charge of generating and analyzing audio.
Victor Gil, a high-school friend and Systems Engineering student, helped me merge the two codes into one, so that they executed at the same time without compiler errors.
It did both things: a 440 Hz tone and the pixel scanning, without crashing. It was a start!
It's not as simple as pasting them onto the same page. At least it wasn't for me; I had no idea.


The bridge between worlds:

Then we found the color-code variable in Pixel Array ( aPixels[int(signal)] ), so we could process it and use it to set the frequency of the wave generator. 
I did not know what "aPixels[int(signal)]" meant at the time. But by showing that variable on screen and moving the mouse, it seemed to work properly, taking negative values in the hundreds of thousands, which is how I had read pixel arrays should behave.
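Those negative values make sense once you know that Processing packs each pixel into a single 32-bit ARGB integer: full alpha (0xFF) sets the sign bit, so every opaque color prints as a negative number. A minimal Java sketch of that packing (my own class and method names, not the sketch's actual code):

```java
// ArgbDemo.java - why Processing pixel values look like large negative ints.
// Processing's pixels[] holds 32-bit ARGB values; alpha 0xFF sets the sign bit.
public class ArgbDemo {
    // Pack four 0-255 channels into one ARGB int, as Processing's color() does.
    public static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
    public static int red(int argb)   { return (argb >> 16) & 0xFF; }
    public static int green(int argb) { return (argb >> 8) & 0xFF; }
    public static int blue(int argb)  { return argb & 0xFF; }

    public static void main(String[] args) {
        int c = pack(255, 200, 100, 50);   // an opaque orange-ish color
        System.out.println(c);             // negative: the alpha byte sets the top bit
        System.out.println(red(c) + " " + green(c) + " " + blue(c));
    }
}
```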

I have to apply a function to these raw pixel values to extract the "hue" or "grayscale" value of the pixel. Then, mathematically "map" or "scale" that variable to a value in hertz, which is what the wave-generator function accepts: a number between 20 and 20,000 that determines the pitch of the sound live.
The color or B/W value goes from 0 to 255 (8 bits), but how it is converted to hertz is very important and can lead to very different results for the same picture.
The hertz range and the direction of the mapping are arbitrary.
You can invert the values, or take another frequency range, and you hear something very different.
I have many ideas on how to refine this link and make it as flexible as possible.

For example:
Red = 50 Hz (bass sound)
Violet = 5000 hz (high pitched sound)
And all intermediate points are calculated automatically.
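A minimal sketch of that linear interpolation in plain Java, using the 50-5000 Hz range from the example above (the helper mirrors Processing's map() function; the class name and ranges are illustrative, not the program's actual code):

```java
// HueToHz.java - linear mapping of an 8-bit hue to a frequency range,
// mirroring Processing's map() function. Ranges are illustrative.
public class HueToHz {
    // Linearly rescale v from [inLo, inHi] to [outLo, outHi].
    public static double map(double v, double inLo, double inHi,
                             double outLo, double outHi) {
        return outLo + (v - inLo) * (outHi - outLo) / (inHi - inLo);
    }

    // Hue 0-255 mapped to 50-5000 Hz (red low, violet high, as in the text).
    public static double hueToHz(int hue) {
        return map(hue, 0, 255, 50, 5000);
    }

    public static void main(String[] args) {
        System.out.println(hueToHz(0));    // 50.0   (red -> bass)
        System.out.println(hueToHz(255));  // 5000.0 (violet -> treble)
    }
}
```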

This is the mapping that felt natural to me, since red is the lowest visible frequency (400-484 THz) and violet the highest (668-789 THz).
I do not know if that matches most people's intuitive association... What is a yellow or a green supposed to sound like?

I never met anybody who experiences synesthesia regularly, to find out.
I showed the first result to Hector Laborde and he told me about the "Piano of color" he had designed years ago. The idea arose of joining the two concepts: creating a non-figurative image that, when explored, would play a melody.

It occurred to me to make a zigzag arrow with colored stripes along it, so that running the mouse over it turns the sequence of colors into a sequence of tones.

I modified the frequency range of the oscillator to two octaves, the range needed to play "Happy Birthday" on a one-handed piano.

Then I needed to find out, within the continuous range, which color values corresponded to the frequency of each discrete musical note.
That way I could create a palette of "color notes": painting the stripes with the color of each note, in sequence, would create the melody as the mouse or a finger explored the drawing.
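A sketch of how such a palette could be computed, assuming equal temperament (each semitone multiplies the frequency by 2^(1/12)) and a linear hue-to-frequency map over two octaves; the class, method names, and numbers are illustrative, not the program's actual code:

```java
// NotePalette.java - equal-temperament note frequencies over two octaves, and
// a hypothetical inverse of a linear hue->Hz map, sketching how the hue of
// each "color note" could be derived.
public class NotePalette {
    static final double C4 = 261.63; // middle C, in Hz

    // Frequency of the nth semitone above middle C (12-tone equal temperament).
    public static double noteFreq(int semitone) {
        return C4 * Math.pow(2, semitone / 12.0);
    }

    // Invert a linear hue(0-255) -> Hz(C4..C6) map to find the hue of a note.
    public static int hueForFreq(double hz) {
        double lo = C4, hi = C4 * 4; // two octaves: C4..C6
        return (int) Math.round((hz - lo) / (hi - lo) * 255);
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 24; n++) {
            System.out.printf("semitone %2d: %7.2f Hz -> hue %d%n",
                    n, noteFreq(n), hueForFreq(noteFreq(n)));
        }
    }
}
```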

 


This is the pic I designed to try the "Color to Sound" little program. You can play "Happy Birthday", or any song, with the color/musical notes at the bottom of the image: C for Do, D for Re, and so on.
https://www.openprocessing.org/sketch/3733
Here is the online applet. It's not working for now (the host page doesn't let me update), but it has been there since 2009.
I made another version with gray values, and another with color again, but this time using a MIDI note generator instead of a sine-wave oscillator.
MIDI notes are discrete (Do or Re, with no intermediate values) and can be made to sound like any instrument, e.g. piano, trumpet, etc...
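That quantization step is the standard frequency-to-MIDI-note conversion (A4 = 440 Hz = note 69); a small Java sketch of it, not taken from the program itself:

```java
// MidiNote.java - converting a continuous frequency to the nearest discrete
// MIDI note number (A4 = 440 Hz = note 69), the usual way a sine-wave pitch
// is quantized before being sent to a MIDI instrument.
public class MidiNote {
    // Round a frequency to the nearest MIDI note number.
    public static int freqToMidi(double hz) {
        return (int) Math.round(69 + 12 * (Math.log(hz / 440.0) / Math.log(2)));
    }

    // Exact frequency of a MIDI note number.
    public static double midiToFreq(int note) {
        return 440.0 * Math.pow(2, (note - 69) / 12.0);
    }

    public static void main(String[] args) {
        System.out.println(freqToMidi(440.0));  // 69 (A4)
        System.out.println(freqToMidi(261.63)); // 60 (middle C)
        System.out.println(midiToFreq(72));     // ~523.25 (C5)
    }
}
```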

Gray scale translated to pure tones, with the notes palette.
White -> low pitch
Black -> high pitch

Color translated to MIDI sequencer notes, with the notes palette.
Warm color -> low-pitched note
Cold color -> high-pitched note

Color in pure tones, flowery field image.
Warm color -> low pitch
Cold color -> high pitch

I want to listen to video...
I'm looking at how to modify an optical mouse to use it as a camera and listen, in real time, to the textures we pass the mouse over.
I have many technical doubts blocking me, but it is possible.
If any Java programmer, or computer or electronics engineer, is interested in participating in any way: welcome!

I try to coordinate and thus unify brain processes:

It's known that the brain has well-defined, differentiated zones that process image, sound, touch, etc., each zone with a structure suited to that specific purpose. There are many functional maps of the brain.

Most of these areas communicate through the center of the brain, but in synesthetes they seem to communicate directly with each other, without passing through the center, so to speak. 

In reality, the specificity lies in the sensor that sends the information (eye, ear, nose). By the time it reaches the cerebral cortex, it is already something abstract: information that simply requires synthesis and classification.

That is why the cortex is almost homogeneous and functionally very adaptable.
When there is slight damage to the cortex, the surrounding neurons adapt and reconnect to fulfill the functions of the dead neurons, as best they can.

Like a PC's processor, it does not distinguish whether it is sending an email, processing video, or simulating 3D.
The same nanotransistors serve for everything if they are properly configured.

This project aims to coordinate two brain processes (image and sound) interactively, hopefully making them blend and mix. I try to create a correlation between what is heard and what is seen, in the broadest sense.

Friday, February 17, 2017

SoundPaint




Intro:

This project continues the process of this very simple program that I started in 2008:

http://openprocessing.org/visuals/?visualID=3733
This used to be an applet you could run in Chrome or Firefox, but the page changed and now it doesn't work.

Using the same idea, and code, for listening to the colors under the mouse pointer, I added a "Paint Mode" to the program, and many options for the relationship between colors and notes.


Synesthesia:
A neurological phenomenon in which stimulation of one sensory or cognitive pathway leads to automatic, involuntary experiences in a second sensory or cognitive pathway. People who report a lifelong history of such experiences are known as synesthetes.


This is how synesthetes are supposed to see musical notes as colors when listening to music. It's from Wikipedia; I don't know where it originally came from.
I can't make much sense of it; I don't know much about music or synesthesia.

I would like to create a "synesthesia mode" that relates those colors to those notes; it would not be a linear mapping of color to frequency.
Or to create "custom-made" pianos, associating notes and colors arbitrarily, with no relationship between them at all: just make each color sound as any hertz value or musical note, one per piano key.
It's doable, but still not implemented (2016).


Process:

I found several Paint codes written in Processing.
I chose the one I thought had the simplest code while still implementing buttons to select colors, plus two extra functions: saving the image to a file and exiting the program. 



Simple Paint by Ferhat Sen!

Here you used to be able to try it; the link is broken now.

I attached my "Color to Sound" to the "Simple Paint" I found. Like sewing a dog's head to a snake's tail.
So a "mode" variable chooses between the two forms of operation (draw or listen) at the beginning of each processing cycle. 
I changed the color mode from the default "RGB" to "HSB"; this way I have better control of the hue, saturation, etc. parameters without using external functions. 

I enlarged the palette from 4 to 14 colors, to have an octave and its semitones. I labeled each color with the corresponding note.

I linked the amplitude of the sound to the saturation value of the color, to keep silent when scanning the white sheet: sound is only heard when passing over the strokes of color. This does have an effect when listening to a photo, or any image whose colors aren't fully saturated. The colors of the SoundPaint palette are fully saturated, with no option to change that (for now, 2016).
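That saturation-to-amplitude link can be sketched like this in plain Java (the RGB-to-saturation formula is the standard HSB one; the linear link to volume is my reading of the text, and the names are mine):

```java
// SatAmp.java - deriving a 0..1 amplitude from a pixel's HSB saturation so
// that white paper stays silent and pure colors sound at full volume.
public class SatAmp {
    // Standard HSB saturation: (max - min) / max over the RGB channels.
    public static double saturation(int r, int g, int b) {
        int max = Math.max(r, Math.max(g, b));
        int min = Math.min(r, Math.min(g, b));
        return max == 0 ? 0.0 : (max - min) / (double) max;
    }

    // Assumed linear link: fully saturated color -> full volume.
    public static double amplitude(int r, int g, int b) {
        return saturation(r, g, b);
    }

    public static void main(String[] args) {
        System.out.println(amplitude(255, 255, 255)); // 0.0: white is silent
        System.out.println(amplitude(255, 0, 0));     // 1.0: pure red, full volume
    }
}
```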
After all that, it looks like this.

Then I converted the "exit" button into the "BLANK" function, which leaves the sheet white again.

I converted the "save image" button into a "save the drawing to a memory array, start reading the color under the mouse, and sound the corresponding note" function. Now it's the "Listen to Drawing" button. 


I used Processing's Minim audio library for the sine synth and the audio output.



Any key pressed on the keyboard returns us to "Paint" mode again. 

I still haven't arrived at a relationship between colors and notes that feels natural. What I will do in future versions is make that relationship configurable. 
Just as some pairs of colors work better together than others, some pairs of notes are more harmonic than others. I would like to find a relationship between colors and notes that works well in both domains.









A problem is that colors are naturally organized in the chromatic circle, while notes work more like a spiral staircase, although a circle of notes is always going up and down. When you organize the 7 notes in a circle, there is always a jump between the last and the first note of the scale.


So I made the default hertz range cover exactly one central octave instead of an arbitrary span.
If the chromatic circle makes one turn, it makes sense that the "sonic" circle also makes a single turn/octave.
Then I found the colors corresponding to each note and created the notes palette with a small program I wrote separately.

Here I tried to set up two octaves, because some melodies need more than one, but one octave is more natural: using two or more octaves per turn of the chromatic circle doesn't feel natural.
There is a difference of at most about 2 Hz between the frequency of the color-notes and the actual note. This happens because "hue" is an integer value (0 to 255), and the fractions are lost when converting to hertz. 


I will try to solve this issue too; the range of the hue variable can be changed in the colorMode() function.
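The size of that rounding error is easy to estimate: with 256 integer hue values spread linearly over one octave, adjacent hues are about 1 Hz apart, so snapping can miss a note's exact frequency by up to roughly half a step (the exact figure depends on the chosen range and on where the rounding happens). A small sketch, with an illustrative C4-C5 range and a class name of my own:

```java
// HueQuantError.java - estimating the frequency error when an integer hue
// (0-255) addresses one octave linearly. The octave bounds are illustrative.
public class HueQuantError {
    // Hz between adjacent integer hue values over [lo, hi] with `levels` steps.
    public static double stepHz(double lo, double hi, int levels) {
        return (hi - lo) / (levels - 1);
    }

    public static void main(String[] args) {
        double lo = 261.63, hi = 523.25;      // one central octave, C4..C5
        double step = stepHz(lo, hi, 256);
        System.out.println(step);             // ~1.03 Hz per hue step
        System.out.println(step / 2);         // worst-case snap error, ~0.51 Hz
    }
}
```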

I modified the "Listen" button.

I created a "Strike key to paint again" message in audio mode.


I tried graphic sliders, looking for one that gave me a precision of at least 1/10,000, in order to select an exact frequency in Hz.
I settled for one that is a bit slow to set, but accurate. It's not modified often, and it needs accuracy.





I decided on a "ticks" selector that sets the range by each of seven octaves; the other way of setting the frequency range was not practical. But it took some research and coding.


I can now read the color of the screen pixels directly, without saving an image file on the hard disk and reading the pixels from there. 
Finally I understood how it all works :) 


Without needing a file to save to and load from, I can upload a working applet to view online. With OpenProcessing.org having issues, I'm now researching how to embed the applet here in the blog. 

For this version (somehow numbered 15) you can load any image using the typical file-selection dialog that all programs use. 
But I took this possibility out in the next versions, to make it internet-friendly.
And then I didn't put it back, because after a while of moving forward the interface is already crowded with buttons and sliders.


I found some fun in listening to photos. Somehow strange and really unpredictable.






I added a slider for brush size. 

A new slider to select any color not included in the palette.
 

MIDI simple instruments, or the sinusoid synth from Minim.
 

Ability to invert the relationship between colors and sounds: in one mode cold colors mean high pitch, in the other warm colors mean high pitch.
 
There's now a grayscale mode: the palette is grayscale, and it picks up the pixel's luminance value instead of its hue value to drive the synth.
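A sketch of that grayscale path, assuming the common Rec. 601 luma weights for the luminance and the white-silent-low/black-high inversion listed earlier (the weights, names, and linear link are my assumptions, not the program's code):

```java
// GrayPitch.java - grayscale mode: derive pitch from a pixel's luminance
// instead of its hue. Rec. 601 luma weights; white -> low, black -> high.
public class GrayPitch {
    // Approximate perceived brightness of an RGB pixel, 0..255.
    public static int luminance(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    // White (255) -> 0.0 (lowest pitch), black (0) -> 1.0 (highest pitch);
    // this 0..1 factor would then be scaled onto the synth's Hz range.
    public static double pitchFactor(int r, int g, int b) {
        return 1.0 - luminance(r, g, b) / 255.0;
    }

    public static void main(String[] args) {
        System.out.println(luminance(255, 255, 255)); // 255
        System.out.println(pitchFactor(0, 0, 0));     // 1.0 -> highest pitch
    }
}
```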


 


This version, 29 somehow, is the latest as of today, 2017.

It works fine as a web applet.


I put the color palette at the top and drew a subtle design of piano keys, an idea I took from my arts-degree professor, Mr. Laborde:
to give it piano "affordance" and help the user's intuition.
I replaced all the sliders from the ControlP5 library with homemade sliders better adapted to each function.
This is most evident in the old hue slider, which is now a homemade "color picker". 
By not using ControlP5 I save the 100 KB the library occupies.
The program's weight went down from 600 to 500 KB, and the interface is more intuitive this way. 

I organized the code into "void functions()".
Now each routine is in a different tab, and the main code only has calls to those functions, not the body of each.
Order and progress!

I created an "Automatic" listening mode. 
A "living bug"... called Juan, crawls across the screen, singing according to the colors it passes over. 
A restless arrow, rather than a bug, if imagination does not rule in your kingdom. 
I learned to make those object-oriented bugs in the course "Artificial life forms applied to art" by Ing. Emiliano Causa. 

I would like to try the program with eye tracking, to be one step closer to "listening with the eyes" in the way this program proposes.

For now that's it.
Similar Programs:

At the beginning of this project, in 2008, I started to investigate and found nine programs that converted image to sound in different ways, some of them interactive.

Audio Paint
Audio Paint 1.0 was released in 2002.
Now, in 2011, it is at version 2.2.

Listen to the picture:
pressing a button scans the image from right to left, producing high-pitched sound for strokes in the upper part of the image and bass for what was drawn lower down.
But as I found none that did it the way my SoundPaint does, I started from scratch, imagining.

Singing Fingers
July 2010, Singing Fingers 1.0
http://singingfingers.com/
With the slogan: "Finger Paint with Your Voice"

Published by Beginner's Mind, almost a year after I uploaded my "Color to Sound" sketch to OpenProcessing.
I wrote them an email to exchange experiences.
Eric told me how they control the color of the stroke by analyzing the predominant musical note at the microphone.
It applies the same concept of converting color to musical notes and vice versa, analyzing the microphone input, and with the possibility of touching the screen directly.
I love the twist they gave the concept, with the possibilities offered by an iPad!
The interactivity of Singing Fingers is very good!
Beginner's Mind seems to be a working group at the Massachusetts Institute of Technology.

My approach has been more scientific than playful.
And less commercial: less iPod, iPad, iBook.
More Openpod, Openpad, Openbook: open source for developers and a free application for users.
Go open software/hardware!

Twinkle
Investigating further I found Jay Silver, one of the creators of Singing Fingers.
Another crazy loon who likes to play with 555s, computers, and sensors.
He created the Twinkle:
basically a web camera with a built-in light that, when passed near colored surfaces, makes the PC emit notes according to the color of the surface.
Something similar to my initial project, which later became SoundPaint and other ramifications.
It works much like the old wand barcode readers, but it picks up the dominant color under the cam and makes it sound, at time intervals that depend, up to a point, on the speed of the computer behind the cam.
Here it is in action. 

http://photosounder.com/
Scans the image from left to right

http://www.skytopia.com/software/sonicphoto/
Scans the image from left to right


Paint2Sound
http://bedroomproducersblog.com/2012/04/11/paint2sound-free-image-to-audio-converter-by-flexibeatzii/
Scans the image from left to right


https://www.seeingwithsound.com/im2sound.htm
Scans the image from left to right (Ctrl+C, Ctrl+V, yeah)

https://highc.org/
Scans the image from left to right (Ctrl+C, Ctrl+V, yeah)

http://www.uisoftware.com/MetaSynth/
Scans the image from left to right (Ctrl+C, Ctrl+V, yeah)

http://www.emusician.com/gear/1332/say-it-with-pictures/40102
Here they list and analyze many programs; not mine.

"Audition, FL Studio, MetaSynth, and Poseidon convert the graphics file into a 2-D sonogram display, which shows frequency on the y-axis and time on the x-axis and uses intensity (brightness) to represent amplitude"

"HighC uses a hybrid musical-score/piano-roll metaphor and, like the original UPIC system, provides tools for drawing the gestures and shapes that will control musical parameters. Coagula and MetaSynth also provide tools for drawing an image from scratch, while AudioPaint lets you generate a new image automatically with its configurable Lines & Curves and Clouds of Points tools."

I couldn't find any program that does what SoundPaint does, not even remotely.



I found this page that documents the relationship between colors and music throughout history.