How Generative AI is Going to Change Your Life and Mess Everything Up in the Process
These are the first marketing images of an upcoming movie I star in with my daughter Mirren, about an aging superhero who is absolutely too old for this shit and winds up having to save some mystical star child.
Or, it’s just what “generative AI” does with a series of uploaded images of the two of us if given some creative prompting. Enthusiasts will tell you that this is the beginning of being able to insert yourself into video games, recreate scenes of things that happened to you in the past, or of things that didn’t happen at all. Meta has started beta testing its “Make-A-Video” software, no doubt trained on all the videos we’ve ever uploaded to Facebook.
Maybe this is the gateway technology that allows the metaverse to take a leap forward and for us to finally get holodecks like in Star Trek.
Critics pose a ton of ethical and legal questions.
They range from what we should actually define as art (which may not seem important, unless you’ve entered an art competition and lost to a bot), to whether this will put creative people out of work, to who owns the rights to a style and how artists should be compensated when their work is used to train the algorithms that create new works. Should the Warhol Foundation, owner of Andy Warhol’s IP, get paid out if you decide to “paint” a bottle of Scope in the AI-trained style of his Campbell’s Soup Cans?
There are broader implications than who owns what and how things are monetized. What will AI-generated images of ourselves do to our own self-image? As it is, we know that curated social networks like Instagram can have negative effects on self-image. Will AI only make it worse?
These are not my forearms. I might like them to be, but the bots are being pretty generous—and also inserting their own biases. There are lots of instances where AI lightens people’s skin tone, makes them thinner or curvier in socially acceptable places, and so on. How do we create tools that let people explore their own presentation of themselves without injecting all sorts of societal baggage about what’s “acceptable” or “preferable”?
This is where companies like Hidden Door (BBV portfolio co) come in. They recognize the importance of being thoughtful about the input and training processes in machine learning—especially when it comes to technologies that kids will interact with. You can’t train your ML tools on the cesspool that is the whole internet and not expect some potentially dangerous results.
That doesn’t even scratch the surface of the issues around deepfakes and generating images of other people. Seemingly overnight, this open-source technology has unleashed the power for individuals to place anyone they can find pictures of in some hilarious or compromising positions, depending on their mood that day.
One of the biggest issues is that this is all open source—so holding the tech itself accountable in some way is nearly impossible, compared to what it would be if the technology were controlled by a single entity (not that we’re particularly good at that kind of regulation either).
It’s also interesting to think about what AI will do to the creative workforce. I tend to believe that it is more likely to automate away tasks, not whole jobs—and that making it easier to get from zero to one in the creative process will only enable more creative experts to work on higher-order, more challenging and interesting tasks.
That being said, I’d hate to be a stock photography model these days.
What this tech definitely does is expand the size of the creator market. In the early days of User Generated Content, the mantra was “everyone can be a creator,” and we talked a lot about the long tail of artists.
The truth is, a lot of that stuff just wasn’t very good—and quality was throttled by the limits of human talent. Now, with generative AI, creators will be limited only by their creativity and not by their directly applicable manual skillset. That’s why I’m bullish on creator tools like Highnote. I fully believe that my daughter Mirren’s crayons and Fisher-Price keyboard are very quickly going to give way to her using AI assistance to create music and video in collaborative communities, expanding the notion of who we define as a “creator.” As it is, I’m a paid Canva user, and you definitely wouldn’t have counted me in that target demo as it was narrowly defined by Adobe ten years ago.
There are a lot of parallels here to Web 2.0 and the earliest emergence of UGC. YouTube was built on the back of illegally uploaded media—and today it’s one of the biggest marketing and revenue generators for the same media companies that sued the site for billions in its early days. That’s undoubtedly going to happen here, too—only I’m not totally sure who you sue when the issue stems from open-source software. People are going to use this software to mesh themselves into their favorite protected IP, and IP owners’ natural instinct is going to be to control and protect when the cat is already out of the bag.
The best thing they can do is get on board early, in the right way. They should figure out how to work with companies that enable creativity using existing IP but with some necessary guardrails on it—lest kids put Freddy Krueger gloves on Snow White and prompt her to start slashing away at the Dwarves (again, Hidden Door is thinking a lot about this and other safety issues).
Let’s not make the same mistakes we made with social media. As VC funds rush to fund the arms race of generative AI, they should be thinking about responsibility first—not only because it’s core to the long-term viability of the technology as a business, but because it’s just the right thing to do. Any investor who isn’t asking ethical and safety questions about the AI they’re investing in is dropping the ball not only as an investor but as a person.
It’s important that while you’re generating images of Gandhi as if he were a character in Vice City, you’re actually being the change you want to see in the world.