
1. WO2020118273 - TRIP-CONFIGURABLE CONTENT

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters


CLAIMS

1. A computer-implemented method for personalizing a vehicle including one or more sensory output devices communicatively coupled to a server, the method comprising:

selecting, by one or more processors, sensory content for delivery to a passenger in the vehicle based on preference data and geographic location data for the passenger; and

delivering the sensory content to at least one of the vehicle or the one or more sensory output devices.

2. The method of claim 1, further comprising:

receiving, by one or more processors, a signal indicating at least one of an identity or passenger profile of the passenger in or boarding the vehicle; and

accessing, by the one or more processors, the preference data and geographic location data for the passenger.

3. The method of claim 2, further comprising, upon receiving the signal, accessing a database of configuration information for the vehicle.

4. The method of claim 3, further comprising dispatching the vehicle selected from a set of different available vehicles, based on the sensory content.

5. The method of claim 3, further comprising selecting the vehicle based on installed hardware for entertainment consumption.

6. The method of claim 1, further comprising detecting a mood of the passenger based on one or more biometric indicators, wherein the selecting is further based on the passenger’s mood.

7. The method of claim 1, wherein the sensory content comprises electronic media content and the delivering comprises providing the electronic media content to the vehicle before the passenger enters the vehicle.

8. The method of claim 7, further comprising selecting a second passenger to share the vehicle based on matching an interest of the second passenger to the electronic media content.

9. The method of claim 1, wherein the sensory output device comprises an optical projector, and the selecting further comprises selecting a still or video image for projecting onto a display surface in or on the vehicle.

10. The method of claim 9, wherein the still or video image is selected for projecting onto clothing, and the delivering further comprises projecting the still or video image onto clothing of the passenger.

11. The method of claim 1, wherein the selecting the sensory content further comprises selecting content simulating at least one of a vehicle driver or fellow passenger.

12. The method of claim 11, further comprising generating the sensory content comprising a simulated personality based on a fictional character.

13. An apparatus for personalizing a vehicle, the apparatus comprising at least one processor coupled to a memory, the memory holding encoded instructions that when executed by the at least one processor cause the apparatus to perform:

selecting, by one or more processors, sensory content for delivery to a passenger in the vehicle based on preference data and geographic location data for the passenger; and

delivering the sensory content to at least one of the vehicle or the one or more sensory output devices.

14. The apparatus of claim 13, wherein the memory holds further instructions for:

receiving a signal indicating at least one of an identity or passenger profile of the passenger in or boarding the vehicle; and

accessing, by the one or more processors, the preference data and geographic location data for the passenger.

15. The apparatus of claim 14, wherein the memory holds further instructions for accessing a database of configuration information for the vehicle in response to receiving the signal.

16. The apparatus of claim 14, wherein the memory holds further instructions for dispatching the vehicle selected from a set of different available vehicles, based on the sensory content.

17. The apparatus of claim 14, wherein the memory holds further instructions for selecting the vehicle based on installed hardware for entertainment consumption.

18. The apparatus of claim 13, wherein the memory holds further instructions for detecting a mood of the passenger based on one or more biometric indicators, wherein the selecting is further based on the passenger’s mood.

19. The apparatus of claim 13, wherein the memory holds further instructions for selecting the sensory content comprising electronic media content and for providing the electronic media content to the vehicle before the passenger enters the vehicle.

20. The apparatus of claim 19, wherein the memory holds further instructions for selecting a second passenger to share the vehicle based on matching an interest of the second passenger to the electronic media content.

21. The apparatus of claim 13, wherein the sensory output device comprises an optical projector and the memory holds further instructions for performing the selecting at least in part by selecting a still or video image for projecting onto a display surface in or on the vehicle.

22. The apparatus of claim 21, wherein the memory holds further instructions for selecting the still or video image for projecting onto clothing, and for performing the delivering at least in part by projecting the still or video image onto clothing of the passenger.

23. The apparatus of claim 13, wherein the memory holds further instructions for the selecting the sensory content at least in part by selecting content simulating at least one of a vehicle driver or fellow passenger.

24. The apparatus of claim 23, wherein the memory holds further instructions for generating the sensory content comprising a simulated personality based on a fictional character.

25. A computer-implemented method for producing video customized for a preference profile of a person or cohort, the method comprising:

maintaining a data structure of video clips suitable for including in a video;

associating each of the video clips with a set of characteristic parameters relating to user-perceivable characteristics;

receiving user profile data relating to a person or group of people via a computer network;

selecting preferred video clips from the data structure based at least partly on the user profile data;

automatically producing a video including the preferred video clips; and

providing the video to a video player device operated by the person or by at least one of the group of people.

26. The method of claim 25, wherein the user profile data relates to the group of people, and further comprising determining membership in the group based on demographic parameters.

27. The method of claim 25, wherein the user profile data relates to the group of people, and further comprising determining membership in the group based on history of digital content consumption.

28. The method of claim 25, wherein associating each of the video clips with the set of characteristic parameters comprises associating each of the video clips with data indicating compatibility with adjacent clips.

29. The method of claim 28, wherein the data indicating compatibility with adjacent clips identifies one of a sequence number or sequence group number for the clip.

30. The method of claim 29, wherein automatically producing the video comprises placing selected clips in a sequence order.

31. The method of claim 25, wherein associating each of the video clips with the set of characteristic parameters comprises associating each of the video clips with a parameter indicating at least one of a clip length, an actor’s identity, an actor’s dialog, a pace, a mood, a technical format, a color temperature, a scene position, an intensity, or a special effects metric.

32. The method of claim 25, wherein the video is a trailer promoting entertainment content.

33. The method of claim 25, further comprising developing clip selection criteria at least in part by correlating viewer response metrics to the video clip parameters and user profile data using a predictive analytics algorithm.

34. The method of claim 33, further comprising supplying sample input and output data to the predictive analytics algorithm, wherein the sample input comprises combinations of video clip parameters for one or more videos and user profile data, the sample output comprises the viewer response metrics after viewing the video, and the output identifies positive correlations between sample input, user profile and desired viewer response.

35. The method of claim 33, wherein selecting the preferred video clips is performed in part by automatically applying the clip selection criteria and in part by manual selection.
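For readability only, the clip data structure and assembly steps recited in claims 25 and 28-31 might be sketched as follows. This is an illustrative sketch, not part of the claims; the field names, mood-based selection rule, and length budget are hypothetical assumptions.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """Hypothetical clip record carrying characteristic parameters."""
    name: str
    sequence_group: int  # compatibility with adjacent clips (claims 28-29)
    length_s: float      # clip length parameter (claim 31)
    mood: str            # mood parameter (claim 31)

def produce_video(clips: list[Clip], preferred_mood: str, max_len_s: float) -> list[Clip]:
    """Select clips matching a profile-derived preferred mood, then place
    them in sequence-group order (claims 25, 30), within a length budget."""
    selected = [c for c in clips if c.mood == preferred_mood]
    selected.sort(key=lambda c: c.sequence_group)  # sequence order (claim 30)
    out, total = [], 0.0
    for c in selected:
        if total + c.length_s > max_len_s:
            break
        out.append(c)
        total += c.length_s
    return out
```

A production system would draw `preferred_mood` (and other selection criteria) from the user profile data of claim 25 rather than passing it in directly; this sketch keeps that dependency explicit for clarity.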

36. An apparatus for producing video customized for a preference profile of a person or cohort, the apparatus comprising at least one processor coupled to a memory, the memory holding encoded instructions that when executed by the at least one processor cause the apparatus to perform:

maintaining a data structure of video clips suitable for including in a video;

associating each of the video clips with a set of characteristic parameters relating to user-perceivable characteristics;

receiving user profile data relating to a person or group of people via a computer network;

selecting preferred video clips from the data structure based at least partly on the user profile data;

automatically producing a video including the preferred video clips; and

providing the video to a video player device operated by the person or by at least one of the group of people.

37. The apparatus of claim 36, wherein the user profile data relates to the group of people and the memory holds further instructions for determining membership in the group based on demographic parameters.

38. The apparatus of claim 36, wherein the user profile data relates to the group of people and the memory holds further instructions for determining membership in the group based on history of digital content consumption.

39. The apparatus of claim 36, wherein the memory holds further instructions for associating each of the video clips with the set of characteristic parameters at least in part by associating each of the video clips with data indicating compatibility with adjacent clips.

40. The apparatus of claim 39, wherein the data indicating compatibility with adjacent clips identifies one of a sequence number or sequence group number for the clip.

41. The apparatus of claim 40, wherein the memory holds further instructions for automatically producing the video at least in part by placing selected clips in a sequence order.

42. The apparatus of claim 36, wherein the memory holds further instructions for associating each of the video clips with the set of characteristic parameters at least in part by associating each of the video clips with a parameter indicating at least one of a clip length, an actor’s identity, an actor’s dialog, a pace, a mood, a technical format, a color temperature, a scene position, an intensity, or a special effects metric.

43. The apparatus of claim 36, wherein the memory holds further instructions for developing clip selection criteria at least in part by correlating viewer response metrics to the video clip parameters and user profile data using a predictive analytics algorithm.

44. The apparatus of claim 43, wherein the memory holds further instructions for supplying sample input and output data to the predictive analytics algorithm, wherein the sample input comprises combinations of video clip parameters for one or more videos and user profile data, the sample output comprises the viewer response metrics after viewing the video, and the output identifies positive correlations between sample input, user profile and desired viewer response.

45. The apparatus of claim 43, wherein the memory holds further instructions for selecting the preferred video clips at least in part by automatically applying the clip selection criteria and in part by processing manual selection input.