Professional Documents
Culture Documents
See in-depth details in my book "Mastering openFrameworks". The book's examples are free, see masteringof.wordpress.com
Interactive Multimedia
1. Interactive multimedia systems. Introduction to openFrameworks
2. Interactive art (lecture by Ksenia Fedorova)
Graphics
3. Two-dimensional graphics
4. Shaders
Sound
5. Interactive sound
Computer Vision
6. Introduction to computer vision. Grabbing and processing camera images
7. openFrameworks and OpenCV
Communication with external devices and programs
8. Communicating with other programs via OSC
9. Connecting external sensors using Arduino
Preface
What is an interactive multimedia system?
Examples
FunkyForest
By Emily Gobeille and Theodore Watson (openFrameworks co-creator) for the 2007 Cinekid Festival in the Netherlands http://zanyparade.com/v8/projects.php?id=12
Examples
Hand from above
by Chris O'Shea
Examples
Body Paint
by Mehmet Akten
Definition
An interactive multimedia system is a hardware and software multimedia system which:
1) is a real-time system;
2) can input data from various sensors, cameras and other signal sources;
3) can output data through graphics, sound, haptics, robotics and other devices.
Design
Low-level libraries
OpenCL (Open Computing Language) - parallelization and speeding up of calculations, in particular by means of the GPU.
Web server
Video 1 Video 2
and so on ...
Middle-level Platforms
These are platforms for "creative coding": each includes a large set of functions and libraries integrated for convenient programming.
Processing
Language: Java. For computer vision, Java is slow.
openFrameworks
Language: C/C++
Cinder
Language: C/C++. Appeared recently and is gaining popularity.
Video 1
Video 2
Video 3
High-level environments
Visual programming environments, which allow projects to be implemented without actual programming. Importantly, they can be extended with plugins made with low-level libraries, and they can work together with middle-level platforms.
VVVV
Focused on visual effects.
Unity3D
Focused on high-quality 3D.
Fields of application
Using only computer vision and computer graphics (and sound), a wide range of interactive systems is currently being produced: advertising, entertainment, training, scientific, health, and art.
Course Description
What we will do
(1) The main interest
- Creation of interactive multimedia systems based on recognition of video and audio signals. (2) Moderate interest
Course Content
1. Introduction to OpenFrameworks
- Basics of openFrameworks. General principles of real-time systems, vision systems and interactive systems.
- 2D graphics.
- Receiving and processing images from the camera; basics of OpenCV.
- Receiving and processing of sound.
- Generation of sound; playback of audio samples.
- 3D graphics.
- Basics of mapping.
- Introduction to the high-level programs Unity3D, TouchDesigner, Quartz Composer, and connecting openFrameworks with them.
- Connecting external devices via Arduino.
Course Content
2. Lecture "The Strategy of interactive art." by Ksenia Fedorova
(Curator at the Yekaterinburg branch of the National Centre for Contemporary Art; researcher in art and culture; postgraduate student at the Department of Aesthetics, Ethics, Theory and History of Culture, Faculty of Philosophy, Ural State University)
Course Content
3. Working on your projects. Participants will be asked to carry out, under our supervision, a number of projects related to video analysis and generation of graphics/sound.
Recommended Reading
OpenFrameworks
Joshua Noble, "Programming Interactivity: A Designer's Guide to Processing, Arduino, and openFrameworks"
Links
OpenFrameworks homepage www.openframeworks.cc
Introduction to openFrameworks
What is OpenFrameworks
openFrameworks is an open library (framework) for C++, designed for working with multimedia.
Developed as a tool for designers and artists working with interactive design and media art.
One of the main advantages of openFrameworks is its extreme ease of use.
It is a very popular tool for creating interactive systems, image processing, installations, and various projects working with graphics, sound, and input/output to external devices.
iPhone OS
History of creation
openFrameworks was developed by Zach Lieberman, Theo Watson, Arturo Castro, and Chris O'Shea, together with other members of the Parsons School of Design, MediaLabMadrid, Hangar Center for the Arts, and others. Development started at the Parsons School of Design (New York), where Lieberman was a student.
http://www.flong.com/projects/tables/
When not to use openFrameworks:
- You need a rich GUI (e.g. a text editor): use GUI development tools instead - Qt, Cocoa, ...
- You need complex rendering-control logic (a 3D game): use engines like Unity3D, ...
- Multimedia capabilities are not needed (e.g. a web server).
- You have the money, time and desire for industrial application development, so you can build your project from a number of low-level libraries.
Application structure
The architecture of openFrameworks is aimed at handling multimedia information in real time.
This determines both the application's appearance and its structure.
Application appearance
Normally an openFrameworks application has two windows: a graphics window and a console window for logs.
Application structure
Application structure
setup();  // set parameters at startup
update(); // computation, analysis of input data
draw();   // draw the current state
Homework
Based on the "pendulum" example, make a "branching pendulum" with pendulums of different weights. It will have much more interesting dynamics.
Projects to do
Choose one of the projects for the independent or collective development (or suggest your own idea): 1. 3D Sculpture Creating 2. Flying flowers 3. The dynamic mapping on the cube
Flying flowers
Script: A classical interactive installation where the audience waves their hands in front of a camera while the screen shows something like flower petals. Wherever a viewer waves, the petals scatter in different directions, and a picture appears underneath. After some time the petals fly back into place. The viewer must wave actively to clear the whole picture. Technology: 1) Use optical flow and background analysis to track the users' movements. 2) Do the rendering in openFrameworks, or in Unity3D or TouchDesigner (transferring data from openFrameworks via OSC).
2. Interactive art
(Lecture by Ksenia Fedorova)
VIDEO ART
Nam June Paik, Buddha Watches TV
ELECTRONIC ART
TELE ART
The Telectroscope lets Londoners and New Yorkers see each other in real time.
ROBOTICS
Stelarc
VIRTUAL ENVIRONMENTS
INTERACTIVE INSTALLATION
Reface [Portrait Sequencer], Golan Levin and Zachary Lieberman (an interactive installation which cuts up and recombines users' faces)
http://medienkunstnetz.de/works/a-volve/images/1
SAVEYOURSELF!!!
2007 Hideyuki Ando (JP) Tomofumi Yoshida (JP) Junji Watanabe (JP) Think that nothing can make you lose your equilibrium? Then it's time for you to try SaveYourSelf!!!
You start by using a digital camera to take a self-portrait and then load it onto a compact display floating in a bowl of water. Now, all you have to do is put on a set of headphones with a built-in electrode, pick up the bowl of water, and the action gets underway. The motion of the water is transmitted directly to your body.
The compact display features an integrated acceleration sensor that measures shifts of the water surface and sends the data to the electrode in the headphones. It emits a low-voltage current that stimulates the portion of the inner ear that regulates the sense of balance. A novel sensory interface based on galvanic vestibular stimulation (GVS) was developed for SaveYourSelf!!! Similar procedures are employed in medical tests investigating how well a person's sense of balance functions. Even a very weak electric current (less than ~1.5 mA) can disturb the feeling of equilibrium.
DMITRY KAWARGA, MODEL OF BIPOLAR ACTIVITY Interaction with the Model of Bipolar Activity object stimulates a feeling of equilibrium and harmony through a simple touch of the metal plates that transfer impulses. www.kawarga.ru
Relax and win! BRAINBALL turns the conventional concept of competition on its head: the victor isn't the contestant who's most active, but rather the one whose brainwaves signal the deeper state of relaxation.
Nowadays, controls that make it easy and convenient to play films are something we take completely for granted. On a DVD player, you can record, fast-forward and reverse, or pause on an individual image. Nevertheless, it's only been possible to view these sequences of shots in one predetermined temporal direction. Now, the Khronos Projector makes it possible to see a film from a completely new point of view.
Muench-Furukawa_Bubbles
These paradise types are endgames of ideological constructs, whether a vision of a classless society or a scientist's vision of a sustainable environment.
Current paradises include, but are not limited to: Allah's Garden, the American Dream, Communism,
Joachim Sauter, Floating.numbers, 2004 Numbers are commonly seen as a quantitative measure of entities. Depending on the context, however, they often also have religious, historical, mathematical and philosophical meanings. "floating.numbers" attempts to bring back this often forgotten or unknown layer of meaning into their interpretation in the age of digital determinism. floating.numbers is a 9 x 2 meter interactive table on which a continuous stream of numbers is floating.
Sergey Kotzun
Movement perception
Realization: a video stream from a PC with a connected web camera is transmitted to a projection screen using a multimedia projector. When a viewer appears in the working zone of the web camera, he sees himself on the projection screen, surrounded by transparent squares (the squares exist only on the projection screen). A program installed on the PC analyzes the viewer's movements. When the viewer and a square come into contact on the screen, a sample corresponding to the square is played, as in a musical instrument, and a primitive geometric element is added to the image on the screen. After a series of contacts the exhibition space becomes filled with sounds and the screen with abstract suprematist compositions.
CyberHelmet TRIP, CYLAND: Anna Frantz, Marina Koldobskaya, Oleg Rodionov, Michail Chernov, Olga Rostrosta
The viewer moves through the exhibition hall in a helmet with wireless video glasses. Motion sensors are embedded in the helmet. The glasses and sensors are connected to a computer by radio. The sensors capture the velocity of the viewer's movements and send signals over the wireless network to a computer, where they are transformed into a psychedelic visual pattern. The faster the viewer moves (walks, swirls, dances), the stronger the psychedelic TRIP.
1. Miguel Chevalier Ultra-Nature - 2006 Interactive virtual reality installation. http://www.miguelchevalier.com/site/pages/autr/41/mosafr.htm 2. Rejane Cantoni/ Leonardo Crescenti INFINITE CUBED 2007 immersive and interactive installation http://www.rejanecantoni.com/infinitoaocubo.html 3. Rejane Cantoni/ Leonardo Crescenti SOLAR 2009 (?) immersive and interactive installation http://www.rejanecantoni.com/infinitoaocubo.html 4. Anne-Sarah Le Meur EYE OCEAN 2009 3D interactive experimental image http://aslemeur.free.fr/projets/agenda_eng.htm 5. Rudolfo Quintas http://www.youtube.com/watch?v=8166KeSZdVA 6. Laboratório de Luz Modulador de Luz 2.0 (2006) 3.0 (2008) http://www.laboluz.org/base_e.htm 7. Jayoung Bang & Yunjun Lee Memorandum on Vessels 2008 www.Raininganimals.net
8. Bonnie Mitchell and Elainie Lillios Encounter(s) 2007 Audio Visual Interactive Immersive Installation http://immersiveinstallationart.com/encounters/index.html 9. Agnes Hegedüs, Bernd Lintermann, Jeffrey Shaw reconFIGURING the CAVE 2000 (in the exhibition Media Museum) 10. Rat tales, 2004 http://www.thegreeneyl.com/rattales 11. GEORGE LEGRADY SensingSpeakingSpace 2000 2002 http://www.virtualart.at/database/general/work/sensingspeak ingspace.html 12. CHRISTA SOMMERER / LAURENT MIGNONNEAU The Living Web 2002 http://www.virtualart.at/database/general/work/the-livingweb.html 13. Lawrence Malstaf (BE) Courtesy Galerie Fortlaan 17 Gent (BE) Nemo Observatorium, 2009 http://www.fortlaan17.com/eng/artists/malstaf 14. Messa di Voce 15. Florian Grond Hear and Now, 2007 http://www.grond.at/index.htm?html/projects/hear_and_now /hear_and_now.htm&html/submenues/submenu_projects.ht m 16. Chris Salter http://www.chrissalter.com/projects.php
Webliography: http://www.videodoc.ncca-kaliningrad.ru/vebliografija/
Ars Electronica Art Futura Artefact Festival Belluard Bollwerk International Festival Biennale of Electronic Arts Perth Boston Cyberarts Festival Capsula (Science, Art, Nature) Electrohype (Computer Arts) Elektra Digital Art Festival Innovation Lab Institute for the Unstable Media International Festival for Contemporary Media Art International Media Art Biennale International Symposium of Interactive Media Design ISEA (Inter-Society for The Electronic Arts) Japan Media Arts Plaza Los Angeles Center for Digital Art Machine Project MLAC Museo Laboratorio D'Arte Contemporanea Neuro Show (Caltech Art Show) New Langton Arts Strange Attractors: charm between art and science STRP Art & Technology Festival The Exploratorium The SIGGRAPH Art Shows Transmediale VIDA (International Competition on Art and Artificial Life) Virtual Platform
www.mediaartnet.org www.mediaartlab.ru www.cyland.ru http://www.amodal.net/precedents.html
3. Two-dimensional graphics
Display Settings
In main.cpp: ofSetupOpenGL(&window, 1024, 768, OF_WINDOW); 1024, 768 - screen size, OF_WINDOW - windowed output. To display full screen at 1280x1024: ofSetupOpenGL(&window, 1280, 1024, OF_FULLSCREEN);
Display Settings
Switching to/from full screen while the program runs (in update()): ofSetFullscreen(bool fullscreen). Example: pressing '1'/'2' turns full-screen mode on/off:
void testApp::keyPressed(int key) {
  if (key == '1') { ofSetFullscreen(true); }
  if (key == '2') { ofSetFullscreen(false); }
}
ofSetBackgroundAuto(bool bAuto) - turns on/off clearing of the image in each frame before draw() is called (default true).
Drawing Shapes
Line ofLine(float x1, float y1, float x2, float y2)
Rectangle ofRect(float x1, float y1, float w, float h)
Circle ofCircle(float x, float y, float radius)
Triangle ofTriangle(float x1, float y1, float x2, float y2, float x3, float y3)
Ellipse ofEllipse
Polygon ofBeginShape(), ofVertex(), ofEndShape()
Smooth curve ofCurve
Drawing Shapes
Options:
Drawing color: ofSetColor(int red, int green, int blue), with values from 0 to 255. ofSetColor(int red, int green, int blue, int alpha) - alpha is transparency, see below. ofSetColor(int hexColor) - 0x00ff00 means green.
Line thickness: ofSetLineWidth(float lineWidth) - line thickness in pixels.
Text output
- Simple text output, without setting the font and size:
ofDrawBitmapString("Some text", 50, 50); // parameters: text and coordinates
- To draw text with a proper font and size, use ofTrueTypeFont:
1) copy a font, e.g. verdana.ttf, into bin/data (there is one in the openFrameworks folder)
2) declare: ofTrueTypeFont myFont;
3) in setup(): myFont.loadFont("verdana.ttf", 32 /* size */);
4) in draw(): myFont.drawString("Good", 50, 50);
- Output to the text console window: cout << "Text" << endl;
Example
Example
Declaring variables:
float px; // top line
float py;
float qx; // indent
float qy;
float col; // color

setup():
ofBackground(255, 255, 255);
ofSetBackgroundAuto(false);
px = 320; py = 240;
qx = 0; qy = 0;
col = 0;
Example
update():
px += ofRandom(-1, 1); // ofRandom(a, b) - random value in [a, b]
py += ofRandom(-1, 1);
qx += ofRandom(-0.3, 0.3);
qy += ofRandom(-0.3, 0.3);
if (px < 0) px += 640;
if (px >= 640) px -= 640;
if (py < 0) py += 480;
if (py >= 480) py -= 480;
if (qx < -30) qx += 15;
if (qx > 30) qx -= 15;
if (qy < -30) qy += 15;
if (qy > 30) qy -= 15;
col += 0.02;
if (col >= 256) col = col - 256;
Example
draw():
int r = col;
int g = int(col * 2) % 256;
int b = 255 - col;
ofSetColor(r, g, b);
ofLine(px, py, px + qx, py + qy);
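The wrap-around checks in update() can be factored into a small helper and verified in plain C++ (wrapCoord is a hypothetical name, not an openFrameworks function):

```cpp
// Keep a coordinate inside [0, size) by wrapping it around the screen edge,
// exactly as the if-statements in update() do for px and py.
float wrapCoord(float v, float size) {
    if (v < 0)     v += size;
    if (v >= size) v -= size;
    return v;
}
```

This works because the random step per frame is far smaller than the screen size, so one add or subtract is always enough.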
Drawing Images
Collage
Collage (from French collage, "gluing") is a technique in visual art consisting of gluing onto a substrate objects and materials that differ from the base in color and texture. A work made entirely in this technique is also called a collage. (Wikipedia) Here, by collage we mean the placement of various images on the screen.
For a collage you need:
- loading of pictures,
- rotation,
- translation,
- resizing,
- transparency.
http://www.chinesecontemporary.com/hong_hao_5.htm
Rotate image
in the draw ()
ofPushMatrix();       // remember the transformation matrix
ofRotate(10.0);       // rotate by 10 degrees around the upper-left corner
image.draw(0.0, 0.0); // draw
ofPopMatrix();        // restore the matrix
Transparency
// 2nd option, with exact control over transparency:
// glEnable(GL_BLEND);
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
for (int i = 0; i < 5; i++) {
  ofPushMatrix();
  ofTranslate(300.0 + i * 100.0, 300.0);  // move
  ofRotate(angle);                        // rotate
  ofScale(1.0 + 0.2 * i, 1.0 + 0.2 * i);  // increase the size
  image.draw(-image.width / 2, -image.height / 2);
  ofPopMatrix();
}
ofDisableAlphaBlending(); // disable transparency
// glDisable(GL_BLEND);
}
Result
These files must be added to the project, like so: 1. Copy them into the src folder of the project. 2. In Visual Studio, right-click the project, choose Add - Existing Items in the menu, and add them both. The correct way is to copy the whole addon folder into openframeworks/addons, but this can be inconvenient if the project should be mobile, i.e. built on different computers.
In setup():
// create the buffer
buffer.allocate(ofGetWidth(), ofGetHeight(),
  false // no auto-clear before every drawing, since we will accumulate the picture there
);
buffer.end();      // end drawing into the buffer
buffer.draw(0, 0); // output the buffer to the screen
// draw the rest ...
In this case, drawing into the buffer accumulates from frame to frame (the trail left by the pendulum), while the rest of the drawing appears only in the current frame (the pendulum itself with its rubber band).
Homework (*)
Draw a polygon filled (textured) with some image. Hint - scheme of the function calls:
ofTexture tex = image.getTextureReference();
tex.bind();
glBegin(GL_QUADS);
glTexCoord2f(...); glVertex2f(...);
...
glEnd();
tex.unbind();
For large capture areas the screen-capture frame rate may be very low. Before recording, do not forget to build your project in Release, not Debug.
It is better to capture with the CamStudio Lossless codec: it is fast and does not spoil the image, but the files are large. Therefore, before publication it is better to convert the file with VirtualDub to another codec, for example XVID.
4. Shaders
http://pixelnoizz.files.wordpress.com/2009/08/picture-54.png?w=510&h=259
What is a shader
Shaders, as the term is commonly used now, are small programs that the graphics card uses to change the geometry of objects and the pixels of images during rendering.
What is a shader
Shaders are small programs written in GLSL. They can be stored in text files. When your application starts, they are compiled and stored in video memory.
This is convenient because you can change and customize the shaders without recompiling the application itself.
Required components:
1. Addon ofxFBOTexture - drawing into a buffer: ofxFBOTexture.h, ofxFBOTexture.cpp (see "Drawing into a buffer" in the lecture on two-dimensional graphics). 2. Addon ofxShader - loading/unloading of shaders: ofxShader.h, ofxShader.cpp. You can download them, for example, from http://playbat-common.googlecode.com/svn/trunk/of/
Example 1. Smoothing
http://www.youtube.com/watch?v=Nkr4JiU0sF0
Smoothing here is implemented by combining shifted copies of the image with different weights. The shift radius is determined by the X coordinate of the mouse. (The idea was taken from the example shaderBlurExample, http://forum.openframeworks.cc/index.php?topic=2729.0)
Text shader
Create the fragment shader file blur.frag in the folder bin/data:
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect src_tex_unit0; // external parameter - the input texture
uniform float blurAmount;            // external parameter - the smoothing radius

void main(void) // this function is applied to each pixel
{
    vec2 st = gl_TexCoord[0].st; // st - input pixel coordinates
    vec4 color;                  // accumulator
    for (float i = -4.0; i <= 4.0; i++) {
        float weight = 5.0 - abs(i);
        color += weight * texture2DRect(src_tex_unit0, st + vec2(blurAmount * i, 0.0));
        // get the pixel color from texture src_tex_unit0
        // at coordinates x = st[0] + blurAmount * i, y = st[1]
    }
    color /= 25.0;
    gl_FragColor = color; // set the output pixel color
}
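The weighting scheme can be sanity-checked outside GLSL. Here is a plain C++ sketch of the same one-dimensional blur (blur1D is a hypothetical helper, not part of openFrameworks); the weights 1+2+3+4+5+4+3+2+1 sum to 25, which is why the shader divides by 25.0:

```cpp
#include <cmath>

// Triangular blur as in blur.frag: offsets -4..4, weight 5 - |i|, sum / 25.
float blur1D(const float* src, int len, int x) {
    float sum = 0.0f;
    for (int i = -4; i <= 4; ++i) {
        int xi = x + i;
        if (xi < 0)    xi = 0;       // clamp at the left edge
        if (xi >= len) xi = len - 1; // clamp at the right edge
        sum += (5.0f - std::abs(i)) * src[xi];
    }
    return sum / 25.0f;              // weights sum to 25, so this normalizes
}
```

A constant input stays constant after blurring, which confirms the normalization.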
Be careful: the GLSL compiler does not automatically convert float <-> int; it writes a warning and the shader does not run.
Text shader
Create the vertex shader file blur.vert in the folder bin/data:
void main() {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = ftransform();
}
This vertex shader does not change anything; the file is needed only because ofxShader requires it.
void testApp::setup() {
    ofBackground(255, 255, 255);
    grabber.initGrabber(640, 480);   // start the camera
    buffer.allocate(640, 480, true); // true - auto-clear background at every step
    // Load the shaders from the files blur.frag and blur.vert.
    // At startup, if there are errors in the shader text, they are shown
    // on screen with the number of the offending row.
    shader.loadShader("blur");
}

void testApp::update() {
    grabber.update(); // update the picture from the camera
}
// draw what you want on the screen, "passing" it through the shader
ofSetColor(255, 255, 255);
buffer.draw(0, 0);
// disable the shader
shader.setShaderActive(false);
}
Example 2. Magnifier
http://www.youtube.com/watch?v=H-mYdfaku90
A magnifying-glass effect appears at the mouse pointer. (The idea was taken from the example shaderZoomExample, http://forum.openframeworks.cc/index.php?topic=2729.0)
Text shader
Create the fragment shader file zoom.frag in the folder bin/data:
#extension GL_ARB_texture_rectangle : enable

uniform sampler2DRect src_tex_unit0;
uniform vec2 circlePos;     // position of the magnifier
uniform float circleRadius; // radius of the magnifier
uniform float zoom;         // magnification factor inside the magnifier
void main(void) {
    vec2 st = gl_TexCoord[0].st;
    float relX = st.s - circlePos.x;
    float relY = st.t - circlePos.y;
    float dist = sqrt(relX * relX + relY * relY);
    if (dist <= circleRadius && dist > 0.0) {
        // the pixel is inside the magnifier and not at its center (we divide by dist)
        float newRad = dist * (zoom * dist / circleRadius);
        float newX = circlePos.x + relX / dist * newRad;
        float newY = circlePos.y + relY / dist * newRad;
        gl_FragColor = texture2DRect(src_tex_unit0, vec2(newX, newY));
    } else {
        gl_FragColor = texture2DRect(src_tex_unit0, st);
    }
}
In addition, create the vertex shader file zoom.vert by copying blur.vert.
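The radial remapping at the heart of the shader can be isolated and checked in plain C++ (zoomRadius is a hypothetical helper name): for a pixel at distance dist from the magnifier centre, the shader samples the source at a smaller radius, so the picture near the centre is stretched outward and appears magnified.

```cpp
// Source-sampling radius used inside the magnifier, mirroring zoom.frag:
// newRad = dist * (zoom * dist / circleRadius).
float zoomRadius(float dist, float circleRadius, float zoom) {
    return dist * (zoom * dist / circleRadius);
}
```

With zoom = 1.0 the mapping is continuous at the magnifier's edge (dist == circleRadius maps to itself) while interior points sample closer to the centre.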
In draw():
// set the shader parameters
shader.setUniformVariable2f("circlePos", mouseX, 480 - mouseY);
shader.setUniformVariable1f("circleRadius", 120);
shader.setUniformVariable1f("zoom", 1.5);
Color operations
We have learned how to combine pixels to produce geometric transformations of the image. Now, how to change the color values: a color is of type vec4, a vector of 4 components R, G, B, Alpha, each taking values from 0 to 1. vec4 color = vec4(1.0, 0.0, 0.0, 1.0); // red color
color[0], color[1], color[2], color[3] are the R, G, B, Alpha components. Consider another example.
A modified magnifier from the previous example, in which the colors inside the magnifier have their R, G, B components swapped.
Text shader
In the file zoom.frag in the folder bin/data, replace the line
gl_FragColor = texture2DRect(src_tex_unit0, vec2(newX, newY));
with
vec4 color = texture2DRect(src_tex_unit0, vec2(newX, newY));
gl_FragColor = vec4(color[1], color[2], color[0], color[3]);
Homework
1. Vortex. Make the magnifier twist the image inside itself. Hint: rotate the vector depending on the value of dist/circleRadius. 2. Water 1. The shader language has sin, cos and atan functions. Make a wavy distortion of the image spreading from the center of the screen, as if something had been dropped into water there. That is, this should be a video, not a static image. 3. Water 2. Use mouse clicks to simulate dropping something into water in the picture. Realize this, for example, by simulating the oscillation amplitude of the waves in a texture.
5. Interactive sound
http://blog.modernmechanix.com/mags/qf/c/PopularScience/9-1950/med_sound.jpg
Amplitude
Time
http://upload.wikimedia.org/wikipedia/commons/thumb/9/9a/Digital.signal.svg/567px-Digital.signal.svg.png
Sampling frequency
8,000 Hz - telephone; enough for speech. 11,025 Hz - games, samples for electronic music. 22,050 Hz - the same uses as 11,025 Hz.
Bit depth
Bit depth is the number of bits used to represent signal samples during quantization (in our case, quantization of the amplitude).
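For example, a bit depth of 16 gives 65,536 amplitude levels. A minimal C++ sketch of quantizing a float sample in [-1, 1] to 16-bit PCM and back (the helper names are ours, not part of any library):

```cpp
#include <cstdint>

// Quantize a float sample to a 16-bit PCM value; out-of-range input is clipped.
int16_t toPCM16(float s) {
    if (s >  1.0f) s =  1.0f;
    if (s < -1.0f) s = -1.0f;
    return static_cast<int16_t>(s * 32767.0f);
}

// Back to float; the round trip loses at most one quantization step (~1/32767).
float fromPCM16(int16_t v) { return v / 32767.0f; }
```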
2. You can change the images based on the values of neighboring pixels (for example, the operation of smoothing).
For audio in PCM format, neither of these two possibilities is directly applicable.
3. E (mi) of the second octave - 659.26 Hz
1.
+ 3.
The last two sounds sound the same, yet their amplitude functions are very different. Thus, the human ear perceives the spectrum of a sound, i.e. its frequency content, rather than its amplitude representation.
6. S&S (Sample & Synthesis) - sampling, analysis and subsequent synthesis; today one of the best technologies for reproducing "live" instruments.
Sampling
Recording: "live" sound - microphone - ADC - PCM format. Playback: PCM format - DAC - speaker.
Additional options: you can change the playback speed, which changes both the pitch and the duration of the sample. Modern algorithms also let you change the pitch of a sample without changing its speed, and vice versa.
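A simple way to change playback speed (and with it the pitch) is to step through the recorded samples at a non-integer rate with linear interpolation. A sketch in plain C++ (playAtRate is a hypothetical helper): rate 2.0 plays twice as fast and an octave higher, rate 0.5 twice as slow and an octave lower.

```cpp
#include <vector>

// Resample a mono PCM buffer: advance through it by `rate` samples per output
// sample, linearly interpolating between neighbouring source samples.
std::vector<float> playAtRate(const std::vector<float>& src, float rate) {
    std::vector<float> out;
    for (float pos = 0.0f; pos + 1 < src.size(); pos += rate) {
        int   i    = static_cast<int>(pos);
        float frac = pos - i;
        out.push_back(src[i] * (1.0f - frac) + src[i + 1] * frac);
    }
    return out;
}
```

Pitch-shifting without a speed change requires more elaborate techniques (phase vocoders and the like), which is why it is mentioned separately above.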
http://josephdbferguson.com/uploads/akai_mpc1000.jpg
(Subtractive) Synthesis
In the pre-computer era: a few simple waves (rectangular, sinusoidal, triangular) were processed by a set of filters (bass, treble, cutting the desired frequencies), and the result went to the speakers. Now this is done digitally. There are difficulties: one should carefully consider the problems associated with the digital representation of sound ("aliasing").
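A minimal digital sketch of that chain, assuming nothing beyond the description above: a naive square-wave oscillator run through a one-pole low-pass filter that "cuts" the highs. (A naive square wave aliases, which is exactly the difficulty just mentioned; real implementations use band-limited oscillators.)

```cpp
#include <vector>

// n samples of a square wave with the given period, filtered by a one-pole
// low-pass: y += alpha * (x - y); alpha in (0, 1], smaller = darker sound.
std::vector<float> squareLowPass(int n, int period, float alpha) {
    std::vector<float> out(n);
    float y = 0.0f;
    for (int i = 0; i < n; ++i) {
        float x = (i % period < period / 2) ? 1.0f : -1.0f; // raw square wave
        y += alpha * (x - y);                               // low-pass filter
        out[i] = y;
    }
    return out;
}
```

With alpha = 1 the filter passes the square wave through unchanged; smaller values round off the corners, removing high frequencies.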
Synthesizer Minimoog
http://www.jarrography.free.fr/synths/images/moog_minimoog.jpg
// declare variables
ofSoundPlayer sample; // sample player
ofPoint p;            // point and radius - for drawing a circle
float rad;

void testApp::setup() {
    sample.loadSound("sound.wav"); // load a sample from the folder bin/data
    sample.setVolume(0.5f);        // volume [0, 1]
    sample.setMultiPlay(true);     // allow several copies of the sample to play at once
    ofSetFrameRate(60);            // frame drawing rate
    ofSetBackgroundAuto(false);    // turn off background erasing
    ofBackground(255, 255, 255);
}
Additive synthesis
Additive synthesis is based on constructing a sound by summing a set of harmonics (i.e., sine waves of different frequencies) with varying volumes.
Any sound can be represented with arbitrary accuracy as the sum of a large number of harmonics with varying volumes. In practice, however, working with a large number of harmonics requires large computational resources, although several hardware and software additive synthesizers exist today.
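A direct (and computationally naive) additive sketch in plain C++, independent of the looped-sample approach used below (the function name is ours):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sum volumes.size() harmonics of baseFreq: harmonic h+1 plays at volumes[h].
std::vector<float> additive(float baseFreq, const std::vector<float>& volumes,
                            int sampleRate, int n) {
    const float twoPi = 6.2831853f;
    std::vector<float> out(n, 0.0f);
    for (std::size_t h = 0; h < volumes.size(); ++h) {
        float freq = baseFreq * (h + 1);          // frequency of this harmonic
        for (int i = 0; i < n; ++i)
            out[i] += volumes[h] * std::sin(twoPi * freq * i / sampleRate);
    }
    return out;
}
```

The inner loop is why many harmonics get expensive: the cost grows linearly with the number of harmonics times the number of samples.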
The harmonics are played as looped samples whose volume is simply changed.
Synth Code 1 / 4
// declare variables
ofVideoGrabber grabber;   // video grabber for capturing video frames
int w;                    // frame width
int h;                    // frame height
const int n = 20;         // number of harmonics
ofSoundPlayer sample[n];  // samples of the harmonics
float volume[n];          // volumes of the harmonics
int N[n];                 // number of pixels contributing to each harmonic
ofSoundPlayer sampleLoop; // sample with a drum loop
Synth Code 2 / 4
// initialize
void testApp::setup() {
    w = 320;
    h = 240;
    grabber.initGrabber(w, h); // connect the camera
    // load the harmonic samples
    for (int i = 0; i < n; i++) {
        int freq = (i + 1) * 100;
        sample[i].loadSound(ofToString(freq) + ".wav"); // files are called 100.wav, ...
        sample[i].setVolume(0.0); // volume
        sample[i].setLoop(true);  // loop the sound
        sample[i].play();         // start the sound
    }
Synth Code 3 / 4
// update the state
void testApp::update() {
    grabber.grabFrame();        // grab a frame
    if (grabber.isFrameNew()) { // if a new frame has arrived
        for (int i = 0; i < n; i++) { volume[i] = 0; N[i] = 0; } // reset the harmonics
        unsigned char *input = grabber.getPixels(); // pixels of the input image
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                // input pixel (x, y):
                int r = input[3 * (x + w * y) + 0];
                int g = input[3 * (x + w * y) + 1];
                int b = input[3 * (x + w * y) + 2];
                int result = (r + g + b > 400) ? 0 : 1; // threshold
                int i = (x * n / w); // which harmonic gets this result
                volume[i] += result;
                N[i]++;
            }
        }
        // set the new volumes of the harmonics
        for (int i = 0; i < n; i++) {
            if (N[i] > 0) { volume[i] /= N[i]; } // normalize the volume to [0, 1]
            sample[i].setVolume(volume[i] / n);  // volume;
            // divide by n, otherwise the output sound will be distorted
        }
    }
    ofSoundUpdate(); // update the state of the sound system
}
Synth Code 4 / 4
// draw
void testApp::draw() {
    ofBackground(255, 255, 255); // set the background color
    float w = ofGetWidth();      // screen width and height
    float h = ofGetHeight();
    ofSetColor(255, 255, 255);   // otherwise the camera frame is drawn incorrectly
    grabber.draw(0, 0, w, h);    // output the frame
    // draw the harmonics
    ofEnableAlphaBlending();   // enable transparency
    ofSetColor(0, 0, 255, 80); // blue with opacity 80
    for (int i = 0; i < n; i++) {
        float harmH = volume[i] * h; // height of the bar of harmonic i
        ofRect(i * w / n, h - harmH, w / n, harmH);
    }
    ofDisableAlphaBlending();  // disable transparency
}
http://www.youtube.com/watch?v=y70Oxk1RAOM
Introduction
Sound synthesis in openFrameworks is performed at the lowest, "byte" level.
Therefore it is suitable for experimental projects with sound. In complex projects it is more convenient to use a specialized library such as SndObj (see the addon ofxSndObj) or a separate program like Pure Data or Max/MSP, connected to openFrameworks via the OSC protocol.
Program Structure
For sound synthesis, the usual program structure is extended with audioRequested(). It is called by the sound driver whenever the next piece of the sound buffer needs to be filled.
Program Structure
In testApp.h, add to class testApp:
void audioRequested(float *output, int bufferSize, int nChannels);
In setup() add:
ofSoundStreamSetup(2, 0, this, 22050, 256, 4);
// 2 output channels,
// 0 input channels,
// 22050 - sampling rate, samples per second
// 256 - buffer size
// 4 - how many buffers to use; affects the delay.
// Buffer size and number of buffers set the balance between the delay and
// the sound glitches that occur when the computer is not fast enough.
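The delay these parameters imply can be estimated directly; a small sketch, assuming the output latency is roughly the number of buffers times the buffer size (the helper name is ours):

```cpp
// Rough output latency: numBuffers buffers of bufferSize samples each,
// at sampleRate samples per second, expressed in milliseconds.
float latencyMs(int bufferSize, int numBuffers, int sampleRate) {
    return 1000.0f * bufferSize * numBuffers / sampleRate;
}
```

With the values above, 256 samples x 4 buffers at 22050 Hz gives roughly 46 ms: audible but acceptable for many interactive installations.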
Program Structure
In testApp.cpp add:
void testApp::audioRequested(
    float *output,  // output buffer
    int bufferSize, // buffer size
    int nChannels   // number of channels
) {
    // example: "white noise" sound on two channels
    for (int i = 0; i < bufferSize; i++) {
        output[i * nChannels]     = ofRandomf(); // [-1, 1]
        output[i * nChannels + 1] = ofRandomf();
    }
}
Example
See the audioOutputExample example in openFrameworks.
Moving the mouse: 1. up and down changes the pitch of the sound; 2. left and right changes the panning. A mouse click generates noise.
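The example's idea, pitch from one control and panning from another, can be sketched in plain C++ (this is not the oF callback itself; the names are ours). The buffer is interleaved stereo, like the output of audioRequested():

```cpp
#include <cmath>
#include <vector>

// Fill an interleaved stereo buffer with a sine of the given frequency,
// panned by pan in [0, 1] (0 = fully left, 1 = fully right). `phase` keeps
// the wave continuous across successive buffers.
void fillStereo(std::vector<float>& out, int bufferSize, float freq,
                float pan, int sampleRate, float& phase) {
    out.assign(bufferSize * 2, 0.0f);
    for (int i = 0; i < bufferSize; ++i) {
        float s = std::sin(phase);
        phase += 6.2831853f * freq / sampleRate;
        out[2 * i]     = s * (1.0f - pan); // left channel
        out[2 * i + 1] = s * pan;          // right channel
    }
}
```

Mapping mouse Y to freq and mouse X to pan reproduces the behaviour described above.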
http://www.youtube.com/watch?v=Pz6PO4H1LT0
Homework
Using the audioOutputExample example, add sound generation to the swinging-pendulum example. Namely: let the pendulum's position on Y set the pitch, and its position on X the panning.
Definition
(From Wikipedia)
Computer vision is the theory and technology of creating machines that can see.
Definition
Topics in computer vision include:
- recognition of actions,
- detection of events,
- tracking,
- pattern recognition,
- restoration of images.
Image examples
Ordinary light, radio waves, ultrasound - all of these are sources of images:
1. Color images of the visible spectrum 2. Infrared images 3. Ultrasound images 4. Radar images 5. Depth images
Image examples
1. Color images of the visible spectrum
http://rkc.oblcit.ru/system/files/images/%D0%9F%D1%80%D0%B8%D1%80%D0%BE%D0%B4%D0%B013.preview.jpg http://imaging.geocomm.com/gallery/san_francisco_IOD032102.jpg
Image examples
2. Infrared images
Image examples
3. Ultrasound images. An image from side-scan sonar:
http://ess.ru/publications/2_2003/sedov/ris6.jpg
Image examples
4. Radar images. A snapshot from radar:
http://cdn.wn.com/pd/b1/3a/abd9ebc81d9a3be0ba7c4a3dfc28_grande.jpg
Image examples
5. Images with depth
Video
http://www.youtube.com/watch?v=pk_cQVjqFZ4
But two-dimensional data arrays are used not only in computer vision:
http://www.tyvek.ru/construction/images/structure.jpg
Key Features
Different real-time processing tasks require different cameras. Their main characteristics are: 1. Resolution 2. Number of frames per second
Resolution
This is the size in pixels of the image obtained from the camera.
320 x 240: accuracy when observing a 1 m object: 3.13 mm; size of 30 frames: 6.6 MB
640 x 480: accuracy when observing a 1 m object: 1.56 mm; size of 30 frames: 26.4 MB
1280 x 1024: accuracy when observing a 1 m object: 0.97 mm; size of 30 frames: 112.5 MB
http://www.mtlru.com/images/klubnik1.jpg
30 fps
Time between frames: 33 ms
60 fps
Time between frames: 16 ms
150 fps
Time between frames: 6 ms. Can be used for a musical instrument:
http://www.youtube.com/watch?v=7iEvQIvbn8o
Infrared image
Using invisible infrared illumination, such a camera can see in a dark room (at a performance).
Analog
Historically the first to appear; the signal is transmitted as an analog signal (TV format). (+) Transmits data over long distances (100 m), albeit with interference. (+) Easy to install, small size. (-) Inputting the signal into a computer requires a special capture card or TV tuner, which usually consumes a lot of computing resources. (-) Interlacing makes the image very difficult to analyze when there is movement (actually two half-frames arrive, each 50 times/sec).
Webcams (USB-camera)
Appeared around 2000; transmit data over the USB protocol, uncompressed or JPEG-compressed. (+) Easy to connect to a computer and software. (+) Cheap, widely available. (-) Overhead: decoding JPEG requires computing resources. (-) The cheapest models usually have poor optics and sensors (noisy image). (-) Because of USB bandwidth limits, no more than 2 cameras can be connected to a single USB hub, though a PC usually has 2-3 USB hubs.
Firewire-camera (IEEE-1394)
Cameras that transmit the signal over the FireWire protocol, usually in a dust- and moisture-proof housing; typically cameras for industrial applications. (+) Transfer of uncompressed video in excellent quality at high speed. (+) Multiple cameras can be connected. (+) Tend to have excellent optics. (-) High price. (-) Require power, which is sometimes difficult to provide with laptops.
Network (IP-camera)
Cameras that transmit data over a network (wired or wireless) channel. Now rapidly gaining popularity in all areas. (+) Easy connection to a PC. (+) Easy installation. (+) Data can be transferred over unlimited distances, which allows building a network of cameras covering a building or an area, attached to an airship, etc. (+) Control: the camera can be rotated and the zoom adjusted. (-) May have problems with response speed. (-) Still a relatively high price. (-) Not yet portable (as of 2011).
Constructed from ordinary cameras by adding an infrared filter and, often, an infrared illuminator. (+) IR rays are almost invisible to humans (in the dark they can be seen as a faint red glow), so they are often used to simplify the analysis of objects in the field of view. (-) Specialized infrared cameras suitable for machine vision are not a mass product, so they usually need to be ordered.
Price: $50.
USB, CCD
(Depth: stereo vision using an infrared laser illuminator, which is why it does not work in sunlight.) USB, CMOS
Threshold
For example, for a pixel (x, y) of an RGB image:
image[3 * (x + w * y) + 0] // red
image[3 * (x + w * y) + 1] // green
image[3 * (x + w * y) + 2] // blue
Threshold
Thresholding finds the pixels whose brightness, i.e. (0.2989 * Red + 0.5870 * Green + 0.1140 * Blue), or a single color component (Red, Green, Blue), is greater than some threshold value. What is needed to perform the processing: access to the pixels of a frame for analysis; analyzing the pixels and displaying the result on the screen.
Threshold
Add to the variable declarations:
//Bytes of the processed image
unsigned char *outImage;
//Texture to display the processed image
ofTexture outImageTexture;

Add to setup():
//Allocate memory for image analysis
outImage = new unsigned char[w * h * 3];
//Create a texture to display the result on the screen
outImageTexture.allocate(w, h, GL_RGB);
Threshold
//Update the state
void testApp::update() {
    grabber.grabFrame(); //grab a frame
    if (grabber.isFrameNew()) { //if a new frame has arrived
        //Pixels of the input image:
        unsigned char *input = grabber.getPixels();
        //Loop over them
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                //Input pixel (x, y):
                int r = input[3 * (x + w * y) + 0];
                int g = input[3 * (x + w * y) + 1];
                int b = input[3 * (x + w * y) + 2];
                //Threshold by blue
                int result = (b > 100) ? 255 : 0;
                //The output pixel will be black or white:
                outImage[3 * (x + w * y) + 0] = result;
                outImage[3 * (x + w * y) + 1] = result;
                outImage[3 * (x + w * y) + 2] = result;
            }
        }
        //Write to a texture for subsequent output to the screen
        outImageTexture.loadData(outImage, w, h, GL_RGB);
    }
}
Threshold
//Draw
void testApp::draw() {
    grabber.draw(0, 0); //output the camera frame
    outImageTexture.draw(w, 0, w, h); //output the processing result
}
//Draw a circle around the object
ofSetColor(0, 255, 0); //green
ofNoFill(); //turn off the fill
ofCircle(pos.x, pos.y, 20); //circle on the original frame
ofCircle(pos.x + w, pos.y, 20); //circle on the processed frame
}
These coordinates can be used to control something. By the way, the pixel count n can be used to check whether the object of interest is present in the shot.
Homework: "Instinct2"
Implement the next interactive project:
1. Take the color-label-finding project and build an image intens consisting of pixels characterizing the intensity of blue, without threshold processing:
int result = b - (r + g) / 2; //variants: b - min(r, g), b - max(r, g)
result = max(result, 0);      //result must be in [0..255]
intens[3 * (x + w * y) + 0] = result;
intens[3 * (x + w * y) + 1] = result;
intens[3 * (x + w * y) + 2] = result;
Display the image on the screen.
Homework: "Instinct2"
2. Place 20-50 colored "creatures" on the screen, with random initial positions and colors. They have mass and velocity. Let them vary in color and size, and let their size pulsate. Such sprites can be drawn in Photoshop with translucent brushes of different colors:
Homework: "Instinct2"
3. Let the creatures move in the direction of maximum blue intensity. To do this, compute the center of mass of intens in some neighborhood of the creature's position (x0, y0):
float mx = 0;
float my = 0;
float sum = 0;
int rad = 100; //radius of the neighborhood; you may want to make it
               //depend on the creature's current size
for (int y = -rad; y <= rad; y++) {
    for (int x = -rad; x <= rad; x++) {
        if (x + x0 >= 0 && x + x0 < w && y + y0 >= 0 && y + y0 < h //inside the screen
            && x * x + y * y <= rad * rad //inside a circle of radius rad
        ) {
            float value = intens[3 * ((x + x0) + w * (y + y0)) + 0];
            mx += value * x;
            my += value * y;
            sum += value;
        }
    }
}
if (sum > 0) { mx /= sum; my /= sum; }
Then (x0 + mx, y0 + my) is the point toward which the creature should be directed. Apply Newton's second law, setting the desired acceleration, so that the creature moves in the right direction.
How to make the physics simulation speed independent of computer power
So that the simulation speed of the physics in the program does not depend on the power of the computer, use a timer:
//Declare a variable
float time0 = 0; //time of the last entry into update()
//In update():
float time = ofGetElapsedTimef(); //time since program start, in seconds
float dt = min(time - time0, 0.1f);
time0 = time;
//Use the dt value in your physics!
//We take the min because if for some reason update is delayed
//(for example, the user moves the window on the screen),
//this protects us from dt becoming too large
//(a large dt can "blow up" the objects).
Note
Tasks such as thresholding, noise removal, object detection, contour tracking and others are easier to solve with the ready-made routines of the OpenCV library, connected to openFrameworks.
// OF_IMAGE_COLOR - 3-byte color. // OF_IMAGE_GRAYSCALE - 1-byte gray. // OF_IMAGE_COLOR_ALPHA - 4-byte, color + transparency.
Introduction to OpenCV
- What is OpenCV - The first OpenCV project - The Mat class - Image processing functions
What is OpenCV
"Open Computer Vision Library" Open library with a set of functions for processing, analysis and image recognition, C / C + +.
What is OpenCV
2000 - first alpha version, supported by Intel, C interface 2006 - version 1.0 2008 - supported by Willow Garage (a robotics lab) 2009 - version 2.0, C++ classes 2010 - version 2.2, GPU support
2. Create a console project: File - New - Project - Win32 Console Application, enter Project1 in the Name field, click OK.
3. Set up the paths. Alt+F7 opens the project properties. Configuration Properties - C/C++ - General - Additional Include Directories: set to "C:\Program Files\OpenCV2.1\include\opencv"; Linker - General - Additional Library Directories: set to C:\Program Files\OpenCV2.1\lib\
Linker - Input - Additional Dependencies: cv210.lib cvaux210.lib cxcore210.lib cxts210.lib highgui210.lib for Release; cv210d.lib cvaux210d.lib cxcore210d.lib cxts210d.lib highgui210d.lib for Debug
The first OpenCV project. 2. Reading an image and displaying it on the screen
1. Prepare the input data: save the file http://www.fitseniors.org/wp-content/uploads/2008/04/green_apple.jpg as C:\green_apple.jpg
2. Write in Project1.cpp:
#include "stdafx.h"
#include "cv.h"
#include "highgui.h"
using namespace cv;
int main(int argc, const char** argv) {
    Mat image = imread("C:\\green_apple.jpg"); //load the image from disk
    imshow("image", image); //show the image
    waitKey(0); //wait for a keystroke
    return 0;
}
3. Press F7 to compile, F5 to run. The program will show the image in a window and exit when any key is pressed.
Mat Class
Mat is the base class for storing images in OpenCV.
// Show the channels in separate windows. // Note that the red channel is index 2, not 0.
imshow("Red", channels[2]);
imshow("Green", channels[1]);
imshow("Blue", channels[0]);
waitKey(0);
return 0;
}
Original image
http://www.innocentenglish.com/funny-pics/best-pics/stairs-sidewalk-art.jpg
http://www.svi.nl/wikiimg/SeedAndThreshold_02.png
The floodFill function fills a region starting from a pixel (x, y), up to specified stopping boundaries, using 4- or 8-connectivity of pixels. Important: it modifies the original image as it fills. Most often it is used to label regions found by thresholding, for subsequent analysis.
http://upload.wikimedia.org/wikipedia/commons/thumb/5/5e/Square_4_connectivity.svg/300px-Square_4_connectivity.svg.png http://tunginobi.spheredev.org/images/flood_fill_ss_01.png
The contour of an object is the line representing the edge of the object's shape. Emphasizing contour points: Sobel. Extracting contour lines: Canny. Applications: 1. Recognition. From the contour one can often determine the type of object being observed. 2. Measurement. From the contour one can accurately estimate the object's size, rotation and location.
http://howto.nicubunu.ro/gears/gears_16.png http://cvpr.uni-muenster.de/research/rack/index.html
Problem
Given an image of a billiard table, find the coordinates of the centers of the billiard balls. Algorithm:
1. Find bright pixels by thresholding. 2. Region analysis: using flood fill, find the connected regions, and among them select those whose sizes could be balls.
1. Threshold
Problem: in the image of the billiard table, select the pixels that are not the table (the shooting conditions are such that the table is dark).
Mat image = imread("C:\\billiard.png"); //load the input image
imshow("Input image", image);
vector<Mat> planes;
split(image, planes);
Mat gray = 0.299 * planes[2] + 0.587 * planes[1] + 0.114 * planes[0];
double thresh = 50.0; //the threshold is chosen empirically
threshold(gray, gray, thresh, 255.0, CV_THRESH_BINARY);
imshow("Threshold", gray);
Note: we have only selected the pixels that are not the table. Finding the coordinates of the ball centers and the cue position requires further processing.
2. Region analysis
floodFill: extraction of connected regions
floodFill - description
The floodFill function fills a region starting from a pixel (x, y), up to specified stopping boundaries, using 4- or 8-connectivity of pixels.
2. Region analysis
Problem: in the image of the billiard table, find the billiard balls, i.e. compute their centers and sizes. The idea: starting from the threshold result, iterate over all connected regions with floodFill, and consider as balls those regions whose sizes lie within predefined bounds.
const int minRectDim = 25; //min and max size of the balls
const int maxRectDim = 35;
//Iterate over the image pixels
for (int y = 0; y < gray.rows; y++) {
    for (int x = 0; x < gray.cols; x++) {
        int value = gray.at<uchar>(y, x);
        if (value == 255) { //if the value is 255, fill the region with 200
            Rect rect; //the bounding box is written here
            int count = floodFill(gray, Point(x, y), Scalar(200), &rect);
Region analysis
//Check the size
if (rect.width >= minRectDim && rect.width <= maxRectDim
    && rect.height >= minRectDim && rect.height <= maxRectDim) {
    //Center
    int x = rect.x + rect.width / 2;
    int y = rect.y + rect.height / 2;
    //Radius
    int rad = (rect.width + rect.height) / 4;
    //Draw a circle with a thickness of 2 pixels
    circle(image, Point(x, y), rad, Scalar(255, 0, 255), 2);
}
        }
    }
}
imshow("out", image);
Comments
In this example we considered the simplest method of finding the balls in the picture: analyzing the sizes of bounding boxes. Such analysis works on the assumption that the image contains no other regions with similar bounding boxes. A real application requires a more detailed analysis of the regions, primarily because balls lying near each other can merge into one connected region. Possible approaches to this problem: 1. After filling the region's interior, extract its contour and analyze its convexities and concavities to separate the balls. 2. Use a circular template, apply it to the detected region and search for its best placement.
Debugging in OpenCV
When debugging a project that does processing with OpenCV, it is very useful to display intermediate images using imshow: imshow("image", image); displays image in a window titled "image". Warning: 1. #include "highgui.h" is required. 2. If you display several images in windows with the same name, only the last image will be visible.
Homework
Create an openFrameworks project which 1) receives a picture from the camera; 2) passes the picture to OpenCV, where it is smoothed and thresholded; 3) displays the camera picture and the resulting image on the screen using openFrameworks.
8. Communicating
with other programs via OSC
http://profile.ak.fbcdn.net/hprofile-ak-snc4/ 23268_357220159002_4989_n.jpg
OSC protocol
The OSC protocol ("Open Sound Control") is a network protocol based on UDP. (+) Low transmission latency, because it uses UDP (not TCP/IP). (-) Packets can be lost, so it is better to send data at a fixed rate, in small portions.
OSC protocol
Other ways
Besides OSC, you can use TCP sockets, for example for communicating with Flash (via XMLSockets). The openFrameworks addon: ofxNetwork. Examples: networkTcpServerExample, networkTcpClientExample.
http://dkds.ciid.dk/wp-content/uploads/2009/02/2970549282_a49c822cd4.jpg
Types of sensors
Consider just a few
http://www.itsahit.com/rikard/Restorations/images/JP6/uncleanpcbpotsandall.jpg
Problem: if the object is black, the sensor may not work, because it relies on the object reflecting light. The distance reading depends on the color of the object.
Passive
Used in security systems. Based on measuring the (thermal) infrared radiation of objects.
Compact and inexpensive; used by hobbyists, in particular for experimental robotics. Drawbacks for use in interactive systems: not accurate, and does not work behind glass.
Principle of operation: sends an ultrasonic signal and measures the time until the reflected signal returns.
General description
Arduino is a hardware computing platform whose main components are a simple I/O board and a development environment for the Processing/Wiring language. Arduino can work as a stand-alone microprocessor board, or connect via USB to a PC and integrate with software running on the computer, for example Flash, Processing, Max/MSP, Pure Data, SuperCollider, openFrameworks. Hardware: an Arduino board consists of an Atmel AVR microcontroller plus supporting components for programming and integration with other circuits. Programming: Arduino programs are created in an integrated development environment written in Java, which includes a code editor, a compiler, and a module for uploading the firmware to the board. It runs immediately, without installation.
Trademark protection of the Arduino name, starting with the Arduino Diecimila board version, led a group of users to produce an equivalent board named Freeduino. The Freeduino name is not a trademark and may be used for any purpose.
Freeduino 2009 - Full analogue Arduino Duemilanove.
Freeduino 2009
Microcontroller: ATmega168 (ATmega328). Digital I/O ports: 14 (6 of them with PWM). Analog input ports: 6. Flash memory: 16 KB (32 KB), of which 2 KB are used by the bootloader. RAM (SRAM): 1 KB (2 KB). EEPROM: 512 bytes (1024 bytes). Clock speed: 16 MHz. PC interface: USB. Power from USB or an external source, selected automatically.
http://www.freeduino.ru
// Print the results to the serial monitor:
Serial.print("sensor = ");
Serial.println(sensorValue);
// Wait 10 milliseconds before the next loop,
// to let the analog-to-digital converter settle
// after the last reading:
delay(10);
}
Firmata protocol
We have seen an example of how the ofSerial class can work with the serial port. This allows communicating with the Arduino. It is usually done by request: send a control signal to the Arduino (e.g., a single arbitrary byte), and expect a packet of data in return.
Inconvenience: for each new sensor configuration, the Arduino has to be reprogrammed and debugged, and the data processing in openFrameworks changed. Solution: use the Firmata protocol, designed specifically to simplify data input/output. The program on the Arduino side then never changes; all configuration is done from openFrameworks.
Firmata protocol
Firmata is a generic protocol for communicating with microcontrollers from software on a host computer. It is intended to work with any host computer software package. Right now there is a matching object in a number of languages. It is easy to add objects for other software to use this protocol. Basically, this firmware establishes a protocol for talking to the Arduino from the host software. The aim is to allow people to completely control the Arduino from software on the host computer. http://firmata.org The ofArduino class in openFrameworks uses Firmata and has simple commands for connecting to the Arduino and for port input/output, for example: int getAnalog(int pin) reads data from analog input pin; int getDigital(int pin) reads data from digital input pin. To use it, the Firmata program must be uploaded to the Arduino; see below.
How to find out the port: at startup, the program prints a list of available ports to the debug window. If the list is not printed, you can look up the connected device's port in the Device Manager.