Reflections & FAQs Re: META Trends
What’s the hardest part of the META Report?
Outside of the reading and parsing, which is incredibly taxing (“chain smoking trends”), the real difficulty is striking the right balance in the write-ups.
From how I see it: half the value of the META Trend output is the re-reporting of the collective’s existing forecasts. The other half is a personal interrogation and commentary on that re-reporting.
Reporting on the industry’s (often) tiresome trends while attempting to elevate them and make them my own, yet preserving their original meaning, is a difficult tension to hold.
Remaining accurate in industry representation without regurgitating nonsense with my name tied to it is tough.
Despite this difficulty, I think the balance has been fair and successful.
How long does it all take?
Two months.
It was once quicker, when there were fewer reports and I didn’t augment the findings with NWO.ai or DALL·E, but the entire process now spans weeks and weekends, originally kicking off around Thanksgiving and wrapping at the end of January.
It’s a real labor of love and a project difficult to sustain without funding, budget, or compensation. This year I (lightly) considered crowdfunding, gatekeeping or sponsorships... but ultimately decided against it. We’ll see what happens next year...
What’s the biggest flaw to the META Report?
This is actually rarely asked, but something I want to proactively share.
A significant downside to this annual exercise is “Gestalt Thinking.” Patterns come to us automatically. I see culture through my eyes. And my eyes are quite consistent.
Perhaps the reason the same trend pathways keep manifesting is because I keep seeing culture through my biased eyes year after year.
We have to wonder: if another seasoned cultural researcher approached the same dataset of 550+ reported trends, would they identify the same META Trends I did? Maybe? Likely.
I’m more confident that they’d also see the same things year after year (that’s their consistent Gestalt Thinking), but my identification of the META patterns may differ from theirs.
This is another exercise I’d be open to exploring.
META “competes” with all trend reports and now the rise of TikTok trend forecasters. Thoughts?
I don’t see this as a competition at all. It’s all a positive sum to me.
The barriers to entry are lower than ever for nearly any profession, and I don’t think that’s necessarily a bad thing. But what these TikTok “trend forecasters” lack are the most important elements of the practice.
Many are able to describe the “what,” but fewer are able to explain the “why” or extrapolate the “now what.”
Cultural analysis is dependent upon many other fields. Without leveraging history (how did we get here?), psychology (how do we think, feel and behave?) and sociology (how do crowds develop and act?), these forecasters miss the necessary foundation beneath what they’re reporting on — i.e. the drivers and historical context. They lack depth and ties to human nature.
And further, without a vertical, client or opportunity, they’re unable to map their findings to anything. They simply report observations without action or a “so what.”
So, all the power to anyone who wants to join, but realize good work is also hard work.
Have you ever answered the question: Is aggregating trend reports as a META exercise a valid methodology for accurate foresight?
No.
And I don’t think it’s even possible to do so. We can’t answer this because we're in a Catch-22. Hold on for this one...
In 2022, we learned that human ranking of the META Trends (my identification of most frequently reported trends from the reports) and the AI rankings (NWO.ai’s big data approach) yield different results.
But we don’t know if this is because we’re splitting hairs over the relative importance of the META Trends, or because there are better META Trends somewhere out there.
And if in fact better ones are out there... One, no other researcher has attempted to ID their own META Trends for us to compare (see answer above), and two, when we tried to identify other META Trends via more objective AI, the AI couldn’t come back with anything insightful.
So we’re left with a clue that something is off (rank discrepancies), but no hard answers about which bigger / better / more accurate META Trends exist. Because this is unanswered, we can’t really determine if this entire thing is even effective. There’s clearly value here, but is it better than leveraging just one report? It’s hard to justify a hard yes.
The other (simpler) approach to answering this question is to reflect upon previously reported META Trends from over the years and then ask: “Has this come to fruition?” This would be a very subjective answer, but perhaps another exercise to explore, and opportunity to bring in big data.
...But would this even be worthwhile if the reports are already covering things already in existence? Of course their reports and this META Report would be “accurate”... because what’s in them is already here.
Which brings us to...
Isn’t the whole machine of “trend reports” simply a self-fulfilling prophecy?
The argument: lots of reports publish the same “trend,” which plants ideas in the collective consciousness, which then unconsciously makes them come true. And if enough people talk about “autonomous cars” being the next big thing, the winds of society blow to make “autonomous cars” the next big thing.
But is this actually the case?
There’s no easy answer here.
Historical sci-fi has certainly embedded visions of the future, which we’ve then manifested. Was this intentional or coincidental? Silicon Valley CEOs have also certainly had a vision (more precisely “their vision”) of a future, which then ultimately became our shared reality. “Seeding” is real.
But an important caveat here is that these trend reports are more often “now reports” than they are “future reports.” Because they hedge risk — out of fear of being incorrect, jeopardizing reputation — they report safe bets... i.e. things already in existence.
So to say these trend reports “seed futures” may actually give them too much credit. At a minimum, they “validate” or give confidence to trends, which may in effect accelerate certain shifts already here.
Nonetheless, there is danger here in either seeding or accelerating facets of culture through these reports...
...But there’s also an opportunity. If this is in fact a self-fulfilling prophecy — where reports can help manifest change — could there be an opportunity to “hack” these reports for preferred futures?
I think so.
For example:
In 2022, when considering report mention frequency, the META Trend “Eco-Everything” (all things environmentalism) was ranked higher than the META Trend “Now! Now! NOW!” (all things commerce innovation). But when analyzed and re-ranked with NWO.ai’s scoring and AI, we found that “Now! Now! NOW!” had more quantifiable cultural energy.
Sooo... while experts reported “eco-trends” more frequently than “commerce trends,” “commerce trends” actually had more quantifiable energy. But! The humans being “off” here maybe wasn’t a bad thing if readers of the reports came across more eco-trends, leveraged the reporting as inspiration, and ultimately made more eco-conscious products, campaigns and decisions.
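For readers curious what this rank comparison looks like mechanically, here is a minimal sketch. The two named trends are from the report, but the mention counts, AI scores, and the third placeholder trend are invented for illustration; NWO.ai’s actual scoring method isn’t public.

```python
# Hypothetical sketch: ranking META Trends two ways and flagging disagreements.
# All numbers below are invented for illustration only.

trends = {
    # trend: (report mention count, AI "cultural energy" score)
    "Eco-Everything":    (41, 62.0),
    "Now! Now! NOW!":    (33, 78.5),
    "Placeholder Trend": (27, 40.1),
}

def rank(items, key_index):
    """Return {trend: rank}, where 1 = highest on the chosen metric."""
    ordered = sorted(items, key=lambda t: items[t][key_index], reverse=True)
    return {t: i + 1 for i, t in enumerate(ordered)}

human_rank = rank(trends, 0)  # ranked by how often reports mention the trend
ai_rank = rank(trends, 1)     # ranked by the (hypothetical) AI energy score

# Surface the discrepancies -- the "clue that something is off"
for t in trends:
    if human_rank[t] != ai_rank[t]:
        print(f"{t}: humans rank #{human_rank[t]}, AI ranks #{ai_rank[t]}")
```

With these toy numbers, the two top trends swap places between the human and AI rankings, which is exactly the kind of discrepancy described above.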
We can possibly tip the scales for good.
We consider individual and corporate “bias” as flaws in forecasting, but when imagining preferred futures, “bias” isn’t inherently bad.
What if we flooded the market with content envisioning or “reporting” more ethical, diverse, sustainable, mindful and prosocial futures? Or what if we reported on trends through a positive lens? (“Now! Now! NOW!” isn’t actually about commerce so much as it is about lifestyle optimizations that ultimately buy more time with family.) What are the outcomes of this sort of manipulation?
This brings us to the nascent concept of Ethical Trend Reporting.
...A topic for another time.