(WO2017136854) MAPPING CHARACTERISTICS OF MUSIC INTO A VISUAL DISPLAY

WHAT IS CLAIMED:

1. A method of presenting a visualization of a piece of music on a display screen as the music is being played, the method comprising:

(a) establishing a mapping system, by

i. selecting a number of audio cues from a set of audio cues, wherein each audio cue represents a distinct acoustic element of the piece of music, and the number of audio cues is optimized with respect to the complexity of the piece of music and the size and the resolution of the display screen, and wherein the audio cues comprise at least one cue selected from: a group of simultaneously played notes (chords), intervals, note sequences and transitional notes; and

ii. assigning a different visual cue to represent each selected audio cue in a manner that provides one-to-one correspondence between each selected audio cue and each visual cue;

(b) extracting the selected audio cues from the piece of music as it is being

played, and converting the extracted audio cues to the corresponding visual cues in the mapping system; and

(c) displaying the visual cues on the display screen as the piece of music is being played, so that one or more persons sees the corresponding visual cues at the same time that they hear the piece of music.
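Claim 1 does not prescribe any particular data structure or code for the mapping system. The following is a minimal Python sketch of one way steps (a) through (c) could be organized; all names (VisualCue, MAPPING_SYSTEM, extract_cues, convert_to_visual, display) and the particular cue pairings are hypothetical choices made purely for illustration.

```python
# Illustrative sketch only: a hypothetical one-to-one mapping between
# selected audio cues and visual cues, following steps (a)-(c) of claim 1.
from dataclasses import dataclass

@dataclass(frozen=True)
class VisualCue:
    shape: str       # e.g. "polygon", "line"
    color: str       # e.g. "#3366cc"
    position: tuple  # (x, y) on the display screen, normalized 0..1

# Step (a): establish the mapping system -- one visual cue per selected audio cue.
MAPPING_SYSTEM = {
    "chord":             VisualCue("polygon", "#3366cc", (0.5, 0.8)),
    "interval":          VisualCue("line",    "#cc6633", (0.5, 0.6)),
    "note_sequence":     VisualCue("arrow",   "#33cc66", (0.5, 0.4)),
    "transitional_note": VisualCue("dot",     "#cccc33", (0.5, 0.2)),
}

def extract_cues(audio_frame):
    """Step (b), part 1: placeholder for the audio analysis that detects which
    selected audio cues are present in the current frame of music."""
    return ["chord", "note_sequence"]  # stand-in result

def convert_to_visual(audio_cues):
    """Step (b), part 2: convert extracted audio cues to their visual cues."""
    return [MAPPING_SYSTEM[c] for c in audio_cues if c in MAPPING_SYSTEM]

def display(visual_cues):
    """Step (c): stand-in for rendering on the display screen."""
    for cue in visual_cues:
        print(f"draw {cue.shape} in {cue.color} at {cue.position}")

if __name__ == "__main__":
    display(convert_to_visual(extract_cues(audio_frame=None)))
```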

2. The method of claim 1, wherein (b) comprises sequential analysis of a series of successive overlapping time samples of the piece of music.
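The series of successive overlapping time samples in claim 2 corresponds to a conventional sliding-window analysis. A minimal sketch follows, assuming an example window length and a hop size smaller than the window; neither value comes from the patent.

```python
# Illustrative sketch: successive overlapping time samples of an audio signal.
# Window and hop sizes are arbitrary example values, not taken from the patent.
def overlapping_windows(samples, window_size=2048, hop_size=512):
    """Yield successive windows that overlap by (window_size - hop_size) samples."""
    for start in range(0, max(len(samples) - window_size + 1, 0), hop_size):
        yield samples[start:start + window_size]

signal = list(range(10000))            # stand-in for PCM audio samples
for window in overlapping_windows(signal):
    pass                               # each window would be analyzed for audio cues
```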

3. The method of claim 1, wherein (a) comprises establishing more than one mapping system, and (b) comprises selecting a preferred mapping system prior to converting the extracted audio cues to the corresponding visual cues.

4. The method of claim 3, wherein the mapping system is selected by a person listening to the piece of music.

5. The method of claim 3, wherein the mapping system is selected automatically.

6. The method of claim 1, wherein the selected audio cues further comprise at least one of: melody, harmony, and percussion lines.

7. The method of claim 1, wherein the selected audio cues further comprise acoustic elements selected from at least one of: amplitude and timbre.

8. The method of claim 1, wherein the selected audio cues further comprise acoustic elements selected from at least one of: note modifiers and modifiers of strums and chords, including N-instrument, sibilance, and attack, where attack can also be a cue modifying strums and chords.

9. The method of claim 1, wherein at least one of the selected audio cues pertains to overall aspects of the piece of music, including: chord progression, tension, affect, ambience and overall volume.

10. The method of claim 1, wherein the one-to-one correspondence further comprises:

(i) orthogonal correspondence between any two orthogonally related audio cues and the two corresponding visual cues wherein the two corresponding visual cues are also orthogonally related to each other; and

(ii) ordinal correspondence for an audio cue as applied to any two notes so that the ordinal relationship between the audio cues for the two notes is preserved in the relationship between the two corresponding visual cues for the two notes.
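One reading of the orthogonal and ordinal correspondence in claim 10 is that independent audio dimensions map to independent visual dimensions, and that order along an audio dimension is preserved by a monotonic mapping to the corresponding visual dimension. The sketch below uses pitch mapped to vertical position and amplitude mapped to marker size; these particular pairings and ranges are illustrative assumptions, not taken from the claim.

```python
# Illustrative sketch: orthogonal audio cues (pitch, amplitude) mapped to
# orthogonal visual cues (vertical position, marker size), with ordinal order
# preserved by monotonic mapping functions.  Ranges are example values only.
def pitch_to_y(midi_note, lo=21, hi=108):
    """Monotonic (order-preserving) map from MIDI pitch to a 0..1 screen height."""
    return (midi_note - lo) / (hi - lo)

def amplitude_to_size(amplitude, max_size=40.0):
    """Monotonic map from normalized amplitude (0..1) to marker size in pixels."""
    return max_size * amplitude

# Ordinal correspondence: if note A is higher-pitched or louder than note B,
# its visual cue is correspondingly higher on the screen or larger.
notes = [(60, 0.5), (72, 0.9)]       # (MIDI pitch, amplitude) for two notes
cues = [(pitch_to_y(p), amplitude_to_size(a)) for p, a in notes]
assert cues[1][0] > cues[0][0]       # higher pitch -> higher vertical position
assert cues[1][1] > cues[0][1]       # louder note  -> larger marker
```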

11. The method of claim 1, wherein a time sequence of the selected audio cues is represented as a time-streaming sequence of corresponding visual cues on the display screen.

12. The method of claim 1, wherein the selected audio cues are selected from one or more of: note, pitch, pitch interval between concurrent notes, pitch interval between sequential notes, amplitude, rhythm, timbre, N-instrument, time of note onset, note duration, amplitude decay of a note during its duration, strum, tremolo, attack of a note, strum or chord, glissando, affect, ambience, sibilance, tension, overall volume, chord progression, vibrato, tremolo, glissando, melody line, harmony line, and percussion line.

13. The method of claim 1, wherein the number of audio cues to be identified, monitored, and visualized is in the range of 3 to 30.

14. The method of claim 1, wherein the number of audio cues to be identified, monitored, and visualized is in the range of 5 to 15.

15. The method of claim 1, further including signal cancellation based on a previously analyzed time segment of music.
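The signal cancellation of claim 15 could, for example, take the form of spectral subtraction, in which energy already accounted for in the previously analyzed time segment is removed before new audio cues are detected in the current segment. The sketch below is only one plausible reading, using NumPy; the function name and parameters are assumptions rather than anything stated in the patent.

```python
# Illustrative sketch: subtract the spectrum of a previously analyzed segment
# from the current segment so that only newly appearing content remains.
import numpy as np

def cancel_previous(current_segment, previous_segment):
    """Magnitude spectral subtraction: one hypothetical form of the
    'signal cancellation' referred to in claim 15."""
    cur = np.fft.rfft(current_segment * np.hanning(len(current_segment)))
    prev = np.fft.rfft(previous_segment * np.hanning(len(previous_segment)))
    residual_mag = np.maximum(np.abs(cur) - np.abs(prev), 0.0)
    # Re-attach the current phase and return to the time domain.
    return np.fft.irfft(residual_mag * np.exp(1j * np.angle(cur)),
                        n=len(current_segment))

prev = np.random.randn(2048)                 # stand-in previous time segment
cur = prev + 0.1 * np.random.randn(2048)     # current segment: mostly old content
residual = cancel_previous(cur, prev)        # what remains to analyze for new cues
```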

16. A system for visualizing a piece of music on a display screen as the music is being played, wherein the system comprises:

(a) a music source;

(b) a display screen;

(c) a memory; and

(d) a processor, wherein the processor is configured to execute instructions stored in the memory, and wherein the instructions comprise instructions for:

establishing a mapping system, by:

i. selecting a number of audio cues from a set of audio cues, wherein each audio cue represents a distinct acoustic element of the piece of music, and the number of audio cues is optimized with respect to the complexity of the piece of music and the size and the resolution of the display screen, and wherein the audio cues comprise at least one cue selected from: a group of simultaneously played notes (chords), intervals, note sequences and transitional notes; and

ii. assigning a different visual cue to represent each selected audio cue in a manner that provides one-to-one correspondence between each selected audio cue and each visual cue;

extracting the selected audio cues from the piece of music as it is being played, and converting the extracted audio cues to the corresponding visual cues in the mapping system; and

displaying the visual cues on the display screen as the piece of music is being played, so that one or more persons sees the corresponding visual cues at the same time that they hear the piece of music.
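Claim 16 recites the method of claim 1 as a system of components (music source, display screen, memory, processor). Purely for illustration, those components could be wired together along the following lines; every class and method name here is hypothetical, and the cue extraction is a placeholder.

```python
# Illustrative sketch: the components recited in claim 16 wired together.
# All class and method names are hypothetical.
class MusicSource:
    def frames(self):
        yield [0.0] * 2048             # stand-in audio frames (file or live stream)

class DisplayScreen:
    def draw(self, visual_cues):
        print("render:", visual_cues)  # stand-in for the actual display

class Visualizer:
    """Plays the role of the processor executing the stored instructions."""
    def __init__(self, source, screen, mapping_system):
        self.source, self.screen, self.mapping = source, screen, mapping_system

    def run(self):
        for frame in self.source.frames():
            audio_cues = ["chord"]     # placeholder cue extraction for this frame
            visual_cues = [self.mapping[c] for c in audio_cues if c in self.mapping]
            self.screen.draw(visual_cues)  # display as the music plays

Visualizer(MusicSource(), DisplayScreen(), {"chord": "blue polygon"}).run()
```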

17. The system of claim 16, wherein the music source supplies the piece of music as a musical data file.

18. The system of claim 16, wherein the music source comprises a stream from a live performance.

19. The method of claim 1, wherein the piece of music comprises a time-streaming music source and the perceptually conformal mapping system is applied in real time.

20. The method of claim 19, wherein the streaming music source comprises music from a live performance or music from a recorded music playback device.

21. A computer readable medium encoded with instructions for visualizing a piece of music on a display screen as the music is being played, wherein the instructions comprise instructions for:

establishing a mapping system, by:

i. selecting a number of audio cues from a set of audio cues, wherein each audio cue represents a distinct acoustic element of the piece of music, and the number of audio cues is optimized with respect to the complexity of the piece of music and the size and the resolution of the display screen, and wherein the audio cues comprise at least one cue selected from: a group of simultaneously played notes (chords), intervals, note sequences and transitional notes; and

ii. assigning a different visual cue to represent each selected audio cue in a manner that provides one-to-one correspondence between each selected audio cue and each visual cue;

extracting the selected audio cues from the piece of music as it is being played, and converting the extracted audio cues to the corresponding visual cues in the mapping system; and

displaying the visual cues on the display screen as the piece of music is being played, so that one or more persons sees the corresponding visual cues at the same time that they hear the piece of music.