(WO2019000054) SYSTEMS, METHODS AND APPLICATIONS FOR MODULATING AUDIBLE PERFORMANCES

CLAIMS

1. A software application for harmonising one or more geographically or temporally distributed renditions with at least one backing clip comprising:

a calibration module for selecting a parameter defining one or more aural or visual characteristics of a first rendition,

a backing clip selector in communication with a backing clip database, configured to filter a collection of backing clips to select a backing clip corresponding with the selected parameter,

a reference selector for selecting a reference clip to act as a reference point for the modification of the first rendition,

a modification module for applying a computational process to the first rendition or the backing clip to modify an aural or visual characteristic of the first rendition or the backing clip to reduce a difference between the first rendition or the backing clip and the reference clip in the aural or visual characteristic, and

a mixing module for combining one or more renditions with the backing clip after modification,

wherein the resulting combination comprises a finished performance.
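
Claim 1 recites a five-module architecture. The following is a minimal structural sketch of how such a pipeline could be composed in Python; every class, field and method name here is a hypothetical illustration, not language from the application:

    from dataclasses import dataclass

    @dataclass
    class Clip:
        # Placeholder payload: audio samples plus example aural parameters.
        samples: list
        key: str = "C"
        tempo_bpm: float = 120.0

    class CalibrationModule:
        def select_parameter(self, rendition: Clip) -> dict:
            # Select a parameter defining an aural characteristic.
            return {"key": rendition.key}

    class BackingClipSelector:
        def __init__(self, database: list):
            self.database = database
        def select(self, parameter: dict) -> Clip:
            # Filter the collection for a clip matching the parameter.
            matches = [c for c in self.database if c.key == parameter["key"]]
            return matches[0] if matches else self.database[0]

    class ReferenceSelector:
        def select(self, database: list) -> Clip:
            # Choose a reference clip to act as the reference point.
            return database[0]

    class ModificationModule:
        def apply(self, clip: Clip, reference: Clip) -> Clip:
            # Reduce the difference to the reference in one characteristic.
            clip.tempo_bpm += 0.5 * (reference.tempo_bpm - clip.tempo_bpm)
            return clip

    class MixingModule:
        def combine(self, renditions: list, backing: Clip) -> Clip:
            # Toy mix: sum the sample streams after modification.
            mixed = list(backing.samples)
            for r in renditions:
                mixed = [a + b for a, b in zip(mixed, r.samples)]
            return Clip(samples=mixed, key=backing.key,
                        tempo_bpm=backing.tempo_bpm)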

2. A software application according to claim 1 wherein the modification module is configured to locate the computational process by calling an application programming interface.
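
Claim 2 locates the computational process by calling an application programming interface. One plausible shape for that call, sketched as a remote HTTP request; the endpoint URL and field names are invented for illustration:

    import requests

    def modify_via_api(rendition: bytes, reference_id: str) -> bytes:
        # Hypothetical endpoint; the claims do not specify a URL or schema.
        response = requests.post(
            "https://api.example.com/v1/modify",
            files={"rendition": rendition},
            data={"reference_id": reference_id},
            timeout=30,
        )
        response.raise_for_status()
        return response.content  # the modified rendition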

3. A software application according to claim 1 or 2 wherein the computational process comprises the modification of a sequence of note attribute values of a first rendition or a backing clip to reduce the difference between the first rendition or the backing clip and the reference clip in the selected aural or visual characteristic.
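
Claim 3's process modifies a sequence of note attribute values to close the gap to the reference. A toy illustration, assuming pitch in MIDI note numbers is the attribute and that the two sequences are already aligned note for note; the blending strength is an arbitrary choice:

    def reduce_pitch_difference(rendition, reference, strength=0.8):
        # strength=1.0 snaps fully to the reference; 0.0 leaves it untouched.
        return [r + strength * (ref - r) for r, ref in zip(rendition, reference)]

    sung   = [59.6, 62.1, 64.4]   # pitches estimated from a vocal take
    melody = [60.0, 62.0, 64.0]   # reference clip's note attribute values
    print(reduce_pitch_difference(sung, melody))  # [59.92, 62.02, 64.08]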

4. A software application according to any one of claims 1 to 3 wherein the first rendition comprises one or more vocal performances of one or more user recordings, and the backing clip comprises the balance of the one or more user recordings excluding the one or more vocal performances.

5. A software application according to any one of claims 1 to 4 wherein the first rendition comprises one or more vocal performances of one or more user recordings, excluding any background and incidental noise present in the one or more user recordings.
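
Claims 4 and 5 split a user recording into a vocal rendition and the residual backing, excluding background noise. Production systems use trained source-separation models; the crude spectral-mask sketch below stands in for such a model purely to show the splitting structure:

    import numpy as np
    from scipy.signal import stft, istft

    def split_vocal_and_backing(recording: np.ndarray, fs: int):
        # Mask the spectrogram and invert both halves. The median-energy
        # mask is a placeholder heuristic, not a real separation method.
        _, _, spec = stft(recording, fs=fs)
        mask = np.abs(spec) > np.median(np.abs(spec))
        _, vocal = istft(spec * mask, fs=fs)
        _, backing = istft(spec * ~mask, fs=fs)
        return vocal, backing  # first rendition and the balance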

6. A software application according to any one of claims 1 to 5 wherein the first rendition or backing clip and the reference clip comprise data translations characterised by note attribute values.

7. A software application according to claim 6 wherein the computational process comprises the modification of a sequence of note attribute values of the first rendition or backing clip to reduce the difference between the data translations of the first rendition or the backing clip and the reference clip in the selected aural or visual characteristic.
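
A plausible reading of the "data translations characterised by note attribute values" in claims 6 and 7 is a list of note events extracted from the audio. The attribute names below are assumptions; the claims fix no schema:

    from dataclasses import dataclass

    @dataclass
    class NoteEvent:
        pitch: float        # e.g. MIDI note number
        onset_s: float      # note start time in seconds
        duration_s: float
        loudness: float     # e.g. RMS level over the note

    def translation_difference(a: list, b: list) -> float:
        # The quantity claim 7's modification would drive down, summed
        # over an aligned pair of note-event translations.
        return sum(abs(x.pitch - y.pitch) + abs(x.onset_s - y.onset_s)
                   for x, y in zip(a, b))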

8. A software application according to claim 7 wherein the one or more user recordings comprise a video recording of the vocal performance.

9. A system for harmonising one or more geographically or temporally distributed renditions with at least one backing clip comprising:

a computing device for executing a software application, capturing a first rendition and communicating the first rendition to the software application, wherein the computing device further comprises:

a processor,

a memory,

a camera or microphone,

a signal transmitter,

a signal receiver, and

a user interface,

the software application further comprising:

a calibration module for selecting a parameter defining one or more aural or visual characteristics of a first rendition,

a backing clip selector in communication with a backing clip database, configured to filter a collection of backing clips to select a backing clip corresponding with the selected parameter,

a reference selector for selecting a reference clip to act as a reference point for the modification of the first rendition,

a modification module for applying a computational process to the first rendition or the backing clip to modify an aural or visual characteristic of the first rendition or the backing clip to reduce a difference between the first rendition and the reference clip in the aural or visual characteristic, and

a mixing module for combining one or more renditions with the backing clip after modification,

wherein the resulting combination comprises a finished performance.
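
Claim 9's computing device captures a rendition and communicates it to the software application. A capture-and-send sketch, assuming the third-party sounddevice and requests packages and an invented upload URL:

    import sounddevice as sd
    import requests

    FS = 44_100  # sample rate in Hz

    def capture_and_send(duration_s: float, upload_url: str) -> None:
        # Record a first rendition from the device microphone...
        rendition = sd.rec(int(duration_s * FS), samplerate=FS, channels=1)
        sd.wait()  # block until the recording finishes
        # ...and transmit it to the application (URL is hypothetical).
        requests.post(upload_url, data=rendition.tobytes(),
                      headers={"Content-Type": "application/octet-stream"})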

10. A system according to claim 9 comprising a software application according to any one of claims 1 to 8.

11. A system according to claim 10 comprising an application programming interface configured to allow the execution of the computational process of claim 2, 3, 7 or 8 to modify an aural or visual characteristic of the first rendition or the backing clip to reduce the difference between the first rendition or the backing clip and the reference clip in the selected aural or visual characteristic.

12. A system according to claim 11 comprising a server containing the application programming interface of claim 11, wherein the server is configured to execute the computational process of claim 2, 3, 7 or 8.
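
Claims 11 and 12 put the application programming interface on a server that executes the computational process. A minimal server-side sketch using Flask; the route, payload format and run_computational_process stub are illustrative assumptions:

    from flask import Flask, request

    app = Flask(__name__)

    def run_computational_process(audio: bytes) -> bytes:
        return audio  # identity stub standing in for claims 2, 3, 7 or 8

    @app.route("/v1/modify", methods=["POST"])
    def modify():
        # Receive a rendition, run the process server-side, return the result.
        modified = run_computational_process(request.get_data())
        return modified, 200, {"Content-Type": "application/octet-stream"}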

13. A method for harmonising one or more geographically or temporally distributed renditions with at least one backing clip comprising the steps of:

selecting a reference clip to act as a reference point for the modification of the first rendition;

generating, by a user, a first rendition;

calibrating the first rendition to select a parameter defining one or more aural or visual characteristics of the first rendition;

selecting a backing clip from a backing clip database for combining with the first rendition;

applying a computational process to the first rendition or backing clip to modify an aural or visual characteristic of the first rendition or the backing clip to reduce a difference between the first rendition and the reference clip in the aural or visual characteristic; and

combining the first rendition, or the first rendition and one or more further renditions, with the backing clip after modification to produce a first performance;

wherein the resulting combination comprises a finished performance.
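
Read as code, the method of claim 13 is a linear pipeline. The sketch below mirrors the step ordering; every helper is a simplified hypothetical stand-in so the pipeline runs end to end:

    def generate_rendition(user_input):        # user generates a rendition
        return list(user_input)

    def calibrate(rendition):                  # select a defining parameter
        return {"length": len(rendition)}

    def apply_computational_process(rendition, reference, strength=0.5):
        # Nudge each value toward the reference (cf. claims 14 and 17).
        return [r + strength * (ref - r) for r, ref in zip(rendition, reference)]

    def produce_finished_performance(user_input, database, others=()):
        reference = database["reference"]                    # select reference clip
        rendition = generate_rendition(user_input)           # generate rendition
        parameter = calibrate(rendition)                     # calibrate
        backing = database["backing"][:parameter["length"]]  # select backing clip
        rendition = apply_computational_process(rendition, reference)
        streams = [rendition, *others, backing]              # combine after modification
        return [sum(v) for v in zip(*streams)]               # finished performance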

14. A method according to claim 13 wherein the computational process comprises the modification of a sequence of note attribute values of a first rendition or a backing clip to reduce the difference between the first rendition or the backing clip and the reference clip in the selected aural or visual characteristic.

15. A method according to claim 13 or 14 wherein the computational process comprises the step of splitting one or more vocal performances from one or more user recordings, wherein the balance of the one or more user recordings excluding the one or more vocal performances remains.

16. A method according to any one of claims 13 to 15 wherein the computational process comprises the step of removing any background or incidental noise from the one or more vocal performances by recognising the one or more vocal performances within the one or more user recordings and removing all sound from the one or more user recordings that is not recognised as the vocal performance.
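
Claim 16 removes every sound not recognised as the vocal performance. The frame-energy gate below shows the keep-or-silence structure only; a real system would substitute a trained voice-activity or singing-voice detector for the threshold test:

    import numpy as np

    def keep_only_vocal(recording: np.ndarray, frame: int = 1024) -> np.ndarray:
        out = np.zeros_like(recording)
        n = len(recording) // frame
        if n == 0:
            return out
        frames = recording[:n * frame].reshape(n, frame)
        energies = (frames ** 2).mean(axis=1)
        threshold = 0.1 * energies.max()   # crude stand-in for recognition
        for i, e in enumerate(energies):
            if e >= threshold:             # treated as the vocal performance
                out[i * frame:(i + 1) * frame] = frames[i]
        return out                         # all unrecognised sound removed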

17. A method according to any one of claims 13 to 16 wherein the computational process comprises the step of analysing the first rendition or backing clip and the reference clip, translating them into data characterised by note attribute values, and reducing the difference between the note attribute values of the first rendition or backing clip and the reference clip.
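
Claim 17's analyse-and-translate step can be pictured as pitch tracking followed by a distance measure. A sketch using the third-party librosa library's pYIN tracker; treating the frame-wise pitch track as the note attribute sequence is a simplifying assumption:

    import numpy as np
    import librosa

    def translate_to_note_attributes(path: str) -> np.ndarray:
        # Estimate a fundamental-frequency track and express it in MIDI
        # note numbers, keeping only frames judged to be voiced.
        y, sr = librosa.load(path, sr=None, mono=True)
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                     fmax=librosa.note_to_hz("C6"), sr=sr)
        return librosa.hz_to_midi(f0[voiced])

    def attribute_distance(a: np.ndarray, b: np.ndarray) -> float:
        # The difference the computational process is meant to reduce.
        n = min(len(a), len(b))
        return float(np.mean(np.abs(a[:n] - b[:n])))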

18. A method according to claim 17 wherein the computational process comprises the step of modifying a sequence of note attribute values of the first rendition or backing clip to reduce the difference between the data translations of the first rendition or the backing clip and the reference clip in the selected aural or visual characteristic.

19. A method according to claim 18 wherein the computational process comprises the step of modifying a video recording of the vocal performance.