When will AI-generated videos actually become usable for marketing?

This is a popular question that seems to be typed into Google, Brave, Arc, Bing and all the other receptive vessels where we seek advice for improving our daily lives.
Well, let's start with Sora, where all the buzz is. OpenAI is teasing us with its new AI video tool, and it's not the only player in the market. That said (and like all first releases), it may well be a little glitchy and prone to the usual AI biases (blonde hair, blue eyes, etc.) that will need correcting. Or maybe I'm wrong.
BUT, before we even consider video, still image generation still has plenty of room for improvement: sure, Adobe Firefly, DALL·E and Stable Diffusion can create stunning results, but they require a great deal of human intervention to get things truly right. I'm by no means trying to play these tools down, as it's genuinely incredible witnessing what they can generate, but videos are essentially a sequence of still images pieced together, so these existing capabilities are my (possibly naive) benchmark for video success.
Going back to the original question: in terms of being 'usable', maybe in a year; in terms of being effective, I'd say two to three years plus. That's my prediction, and I'll explain some of the reasons why below...
  1. Accuracy and Reliability: AI video tools rely on algorithms that may not always be accurate or reliable. Mistakes can occur in tasks such as facial recognition, object detection, or scene understanding, leading to erroneous results.

  2. Bias and Fairness: AI algorithms can inherit biases present in the data they were trained on. This can lead to unfair or discriminatory outcomes, especially in tasks like facial recognition where certain demographics may be misclassified more often.

  3. Privacy Concerns: Video processing often involves analyzing personal data, such as faces and behaviors. This raises privacy concerns, especially if the data is not properly anonymized or if consent is not obtained from individuals appearing in the videos.

  4. Security Risks: AI video tools can be vulnerable to attacks such as adversarial inputs, where small, carefully crafted changes to the input can cause the AI to make significant errors. This could be exploited to manipulate or deceive the system.

  5. Ethical Dilemmas: The use of AI video tools raises ethical questions, particularly regarding surveillance, consent, and the potential for misuse. For example, widespread deployment of facial recognition technology without proper regulation can infringe on individuals' rights to privacy and freedom of movement.

  6. Dependency and Autonomy: Relying too heavily on AI video tools can lead to a loss of human oversight and autonomy. Humans may become overly dependent on AI for decision-making, leading to complacency or the abdication of responsibility.

  7. Legal and Regulatory Challenges: The rapidly evolving nature of AI technology can outpace existing legal and regulatory frameworks. This creates uncertainty around issues such as liability for AI-generated content or the legality of certain uses of AI video tools.

  8. Job Displacement: Like other forms of automation, AI video tools have the potential to displace human workers in certain industries, particularly those involving repetitive or mundane tasks. This can lead to unemployment or the need for workers to acquire new skills.

Addressing these issues requires careful consideration of the ethical, legal, and societal implications of AI video tools, as well as ongoing efforts to improve the accuracy, fairness, and transparency of these technologies. I believe we're living in a time of vast grey areas, where government policy is playing catch-up and AI is evolving faster than regulators can respond, which makes all of this very hard.

Please let me know what you think below!