The evolution of video and audio entertainment took us from bards and roving entertainers to staged plays supplemented by radio theater.
That was our primary evening entertainment until TV, which arrived first in black and white until, largely thanks to “Disney’s Wonderful World of Color,” color sets displaced black-and-white ones.
Television then moved from standard-definition, CRT-based sets topping out around 25 inches to today’s flat screens, which are mostly 4K and point toward a potential 8K wave later this decade.
TV content evolved as well, from mostly live broadcasts to taped shows, and from studio sets to on-location shoots to computer-generated imagery (CGI). However, some productions, like the coming “Beetlejuice” movie sequel, still prefer physical sets to create a grittier image.
Each stage of that evolution changed the skills required as cameras and production technology gained various levels of automation. But the most significant change we anticipate is the move to AI-generated content. This month, two technologies emerged in beta: OpenAI’s Sora, which creates beautiful and realistic videos that currently lack sound, and ElevenLabs’ AI voice generator, which could supply realistic sound.
OpenAI’s Sora, coupled with ElevenLabs’ audio, puts us within a few years of production-quality AI video content. Check out these blended AI-created video clips, produced without actors, writers, camera operators, graphic artists, or most of the production crew typically tied to a TV show or movie.
While I expect this technology will initially be used mainly by individuals and upstart studios and will be focused mostly on pilots, eventually, this will be how most content is produced.
Let’s talk about the entertainment world post-AI as it will exist in the second half of the decade, particularly after the recent actor and writer contracts expire. We’ll close with my Product of the Week, the Acer Swift Edge 16 laptop, which has a near-perfect balance of technology and price.
The World of User-Driven Content
If you look at YouTube, much of the content isn’t created by companies but by individuals, some with decent production budgets.
AI will allow creators to produce even stronger content at lower cost and enable users to generate content uniquely interesting to them. Until regulatory bodies catch up and enforcement becomes adequate, we will undoubtedly get more fake content that looks real. Still, the real money will be in creating content that lots of people enjoy and that is designed to be altered by those who view it.
The result would be something like OwlKitty, where a cat is added to existing movie footage, except that you could replace any of the characters with anyone you wanted (your kids, for instance). However, this is just the initial wave. After that, I see efforts separating into those who like to modify content created by others and those who want to produce the content that gets altered.
While I have no doubt that professionals already upset by these advancements won’t be happy with this change, it really is no different from any previous move to automation: those doing the work being automated were upset because their jobs were changing dramatically or going away.
The result should be a move away from static content toward content that can be infinitely altered. If you don’t like a film’s ending, you can change it, or, in the future, the streaming service will know what you prefer and create or alter movies to optimize them automatically for your interests.
However, while this will work with head-mounted displays and truly benefit products like the Apple Vision Pro, it won’t play well for groups of people with different interests. In that case, the service will look for commonalities in the group and then craft content most likely to appeal to the largest number of people in a group or those who have a say in the matter — like parents over kids.
This approach could make for some interesting family dynamics, or it could result in even more isolation between family members. Much as with tablets and smartphones today, each person would dive into their own screen and content, and group watching for anything but sports would become a thing of the past.
Undoubtedly, we will get a lot more crap from people who try, and fail, to learn how to direct AIs to create content that they, or anyone else, actually want.
Much as Apple figured out how to license digital music, the winner here will likely be the company (and its related content creators) that figures out how to license modifiable video content and charge properly for it.
I think YouTube has the best chance of doing this, but Facebook and even Microsoft are in the running. Steve Jobs could have figured this out, but I think Tim Cook is too rigid in his views, and getting this right would require a lot of creativity. So, while Apple could do this, I doubt they’ll be the first and are more likely to follow someone else’s lead here.
More Options for Content Producers and Consumers
In 1966, Woody Allen released a movie titled “What’s Up, Tiger Lily?” It was a serious spy movie that Allen re-imagined as a comedy. Here is a clip.
Not only will new AI technology make movies like “Tiger Lily” much easier to create, but it will also let the actors’ images be changed so that their actions and movements are more in line with the new dialog. You could completely change a movie with directions alone instead of reshooting it. I also expect that content created by AI, without live actors, will be easier to alter than more traditional content, given the digital nature of the source.
This would blur the lines between content like video games, which you enter and interact with, and traditional video content, which you typically watch as an audience. Remember “Hardcore Henry,” billed as the first movie shot entirely in first person? It looked interesting, but while you were placed in the head of the protagonist, you couldn’t alter the outcome.
Imagine a “Hardcore Henry” where you could alter outcomes; wouldn’t that be a video game? We should be able to create content that is both watchable and playable, irretrievably blurring the lines between video games and video content. In fact, it may just become a setting: when you start the movie, you decide whether you want to watch it or participate in it.
From Lucid Dreams to Movies
One interesting anticipated development is blending the technologies that help people have lucid dreams with AI that translates those dreams into video content. Imagine being able to share your dreams with others, or just going back to experience a dream again and complete it.
When I have a lucid dream, I often wake up before the story I’m dreaming is complete, or because some event in the dream forces me awake. It’s frustrating. I’ll lie in bed and try to finish the dream in my head. Now imagine having the AI finish dreams for you, so that not only do you remember them, but they are a ton less frustrating because they are complete.
I once wrote a short story about a job category I called “Dream Weaver.” It was about people who could dream complete movies and then, working through a publisher, turn those creations into marketable content.
That concept will become possible as we blend these AI technologies with efforts to help people direct and remember lucid dreams.
Wrapping Up
The coming technologies that create video content and sound from stories we or an AI write will give us far more content choices than we have today and open quality content creation to an ever-wider pool of creators.
I expect the people who first learn to use these tools to build frameworks for users to play in will do very well. I also expect that phase to be transitory as AIs get better at anticipating our unique needs on the way to the coming singularity.
There will be significant efforts to slow the advancement of this technology. However, I doubt they’ll be very effective, suggesting that the content world of the future will look more like an evolved YouTube and even less like the studios and networks that surround us today.
As with all changes, those who figure out how to roll with and monetize this change will do well. Those who fight it, probably not so much.
Acer Swift Edge 16 OLED Laptop
While companies are increasingly asking people to come back into the office, many, if not most, still allow some work from home. That means those who work from home primarily or regularly should favor laptops with larger screens, both because moving large monitors around the house gets old and because, when we are home, we often need to keep an eye on kids or pets to keep them out of trouble.
A 16-inch laptop is the perfect size for someone to use in the office, home, or school. It’s small and light enough to be portable, but the 16-inch screen provides sufficient real estate, so you don’t feel as compromised as on a smaller screen.
Of the 16-inch laptops, the most interesting one I’ve seen is the Acer Swift Edge 16. It has one of the first NPUs (anticipating desktop AI), along with decent CPU and GPU capabilities, all from AMD, a company that has performed impressively of late.
The laptop has an OLED display with over a billion colors, incredibly deep blacks, and a 120 Hz refresh rate, which is good enough for most gaming and certainly great for video content (AI-created or not).
At under 3 pounds, it is light for a laptop this size, and it is one of the first laptops with Wi-Fi 7. It includes the Microsoft Pluton chip, which provides a higher level of security than most laptops offer today. Fully configured, the Swift Edge 16 is still under $1,500.
One shortcoming is that it only gets around six hours of battery life, but that’s fine for a station-to-station laptop, which is what you need when you work in the office or at home, where outlets are handy.
The Acer Swift Edge 16 may be the perfect notebook for those working from the office and home, and it is my Product of the Week.