Let’s talk about this in depth here. It’s going to change the way we all live, and it has already been finding its way into every company’s reports and plans for the future. I’ve discussed my opinion on this in my journal (I’ll link later) and would love to get a discussion going to figure out how we can best capitalize on this transformative innovation.
The easiest way to start talking about AI is with the stock AI, per VC. For me this stock runs on keywords, and even the ticker is AI… but they recently had decent earnings and the stock has run for a few days since they announced. I think they announced earnings early because the stock was in a downward spiral. IV is almost always high, so puts are tough to play here and calls are much better. I did call it out in the low 20s but didn’t like how pumpy it was from prior runs. 28/30 is recent resistance, but at this point it will latch onto SPY or rise/fall with a new catalyst.
There are a lot of articles this week about using AI for disease detection (before symptoms), how the future may include AI doctors, using AI to analyze medical images, and how it may affect biotechs going forward. It feels a little like we’re at the very earliest stages of living in a Star Trek episode.
I think that if AI crosses the threshold into understanding, the implications are huge. I can’t say whether that will happen in 5 years, 10 years, or at all. Noam Chomsky, the godfather of modern linguistics, called ChatGPT “high-tech plagiarism.” In this NYT piece, he states:
But ChatGPT and similar programs are, by design, unlimited in what they can “learn” (which is to say, memorize); they are incapable of distinguishing the possible from the impossible. Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility. Whereas humans are limited in the kinds of explanations we can rationally conjecture, machine learning systems can learn both that the earth is flat and that the earth is round. They trade merely in probabilities that change over time.
For this reason, the predictions of machine learning systems will always be superficial and dubious. […] The correct explanations of language are complicated and cannot be learned just by marinating in big data.
True intelligence is also capable of moral thinking. This means constraining the otherwise limitless creativity of our minds with a set of ethical principles that determines what ought and ought not to be (and of course subjecting those principles themselves to creative criticism). To be useful, ChatGPT must be empowered to generate novel-looking output; to be acceptable to most of its users, it must steer clear of morally objectionable content. But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
This is where I disagree with all these naysayers most. A tool doesn’t need to think for itself to be useful; it just needs to do what it’s asked better than if you didn’t use it. For AI to be useful, the bar (currently) is pretty low, and you could argue it has already cleared it. There are higher-level functions it can’t do yet, but people have already replaced their old tools with it and seem happy to find new uses for it, so I couldn’t disagree more strongly with this narrow vision of what “success” for AI will look like. It may at some point meet that lofty ambition, but for now we’re happy if it can write our essays and make digital images for us.
Adding here that the stocks we’ve discussed relating to the AI bubble are AI, NVDA, MSFT & PLTR.
I’m going to add CHAT to the list. New ETF (as of today) exclusively for AI investment holdings.
So Noam is correct in his understanding of the technology as a form of “plagiarism” in the sense that it only knows what it’s seen and isn’t capable of creating new concepts or ideas. It’s basically a Google search that talks to you, in a way.
I agree with Hank that he seems to accidentally equate this to being useless, which it absolutely is not.
I guess on more theoretical ground, in my opinion he’s putting far too much weight on AI’s “moral thinking,” as though that would make it more intelligent, when in my opinion the opposite is true. We routinely think immorally to broaden our perspectives on things, and preventing AI from doing the same by curtailing what it’s exposed to doesn’t create a smarter AI; it creates a more useless one. OpenAI’s attempts to “safeguard” ChatGPT’s responses have rendered it effectively useless for a lot of conversations.
So I wanted to comment briefly on PLTR. We learned on their earnings call that they actually aren’t attempting to build their own generative AI model; instead, they’re focusing on developing a frontend for models that other companies create. They cited the leaked GOOGL memo about how open source is outpacing established companies’ development of these models as their reasoning.
All valid points.
Perhaps my bias (and Chomsky’s) is concern over the threat to high-skill jobs such as professors, lawyers, accountants, and so on. There are arguments that AI does not possess common sense and may never, versus claims that AI already possesses common sense in the form of “Artificial General Intelligence.”
It’s easy to foresee AI robots taking over lower-skill professions such as cashiers, baristas, warehouse workers, and grocery store clerks. High-hazard professions such as miners and deep-sea oil rig roles are also on the chopping block. I think software engineers are in trouble long-term too, provided AI can translate natural language into machine code effectively.
Also, sex dolls are going to get more interesting (Valerie 23 anyone?).
I’ll have to do a deeper dive on Hyundai (HYMTF), which owns Boston Dynamics; iRobot (IRBT); and some of the other names under the ROBO ETF.
I think you and Chomsky are actually in agreement on the idea of “moral thinking.” He agrees that because ChatGPT’s creators have chosen to shelter it from objectionable content, they have hindered its ability to contribute to controversial (i.e. important) discussions. I don’t think he’s arguing that a truly intelligent AI would only be able to think on the good side of morality, or ought to be sheltered from “immoral” input — only that in the absence of AI’s own innate ability to establish ethics and reason morally, its ability to “solve” human problems (rather than merely contribute data to decisions about them) will be severely limited.
To your and Hank’s point, none of this means AI is useless, but I think what Chomsky is responding to is the (in his view) overzealous response in media and markets to what you yourself correctly labeled a “Google search that talks to you.”
At the moment ChatGPT and BARD are nowhere near AGI. That said, we already know there isn’t a need for cashiers: AMZN grocery stores, for example, use A.I. to track you so you only get charged for what you take. A coworker made a good point that this current generation of A.I. is more like an advanced database that quickly pieces together responses from vast inputs, rather than an actual A.I. that has “learned” all this information and is providing an AGI answer based on its own interpretation of it. And before we get too dystopian, remember that there will always be a group of people who prefer humans doing these tasks. The “rally against the machine” people and some places like coffee shops, bars, and restaurants will start charging a premium for the human-to-human experience. The same can be said for the initial robot AGI experience as well. Again, A.I. is on its hype train right now; we’re in the early Bitcoin hype days of its cycle. Remember that A.I. has been around for a while, but it’s only now that we’re hearing about it everywhere because of generative A.I.’s ease of use.
A.I. stocks we should keep an eye on, according to TD Cowen analyst John Blackledge. Going to dive into these more in a little bit.
I think ADBE, ORCL, and CRWD are well positioned to take advantage of the early AI iterations and really take over their respective markets, and I’ve invested long-term in each because of that. There are a few on that list I’ll need to look into more. Always good to get a sense of potential plays on the horizon.
In this type of game, you really do want to be first to market, but if you can’t be, then you better be the best. We’re still waiting for the firsts in a lot of these market segments, and the bests won’t be for a good while, so there are likely many players not on our lists that will come up over the next year or two.
I feel like, with the overall market sentiment right now and where NVDA is positioned in this movement, its earnings today are more important than usual for this bubble. A bad earnings showing would likely affect every AI player and might slow the adoption of AI solutions into more companies’ future outlooks (and we’d likely hear it less on calls). A good earnings showing (regardless of the numbers…) would mean we’re past the inflection point and probably moving into the second phase of adoption (where people start taking it seriously and feel FOMO).
I agree with this. Only thing is, like <@768562231964860426> said in last week’s call, everyone who has shorted NVDA the past 6 months has gotten burned. It will be interesting to see whether the stock reflects the actual earnings call or whether FOMO kicks in and, no matter what is said, it just rockets up because it’s had a recent sell-off. I might take a few puts on AI as downside protection, but it seems NVDA is the driver of AI sentiment at the moment, and if that wavers for any reason the market will probably overreact.
Having had a hot second to digest NVDA absolutely destroying earnings expectations, and their call just being all boss, I think it’s clear we’re going to see significant FOMO starting now. We really need to figure out what other opportunities running parallel to NVDA are available for the community to rotate this win into.
As I said last week: don’t look for the winning racehorse. Buy the people building the racetrack components.
AI has earnings on 5/31; I’m thinking of easing into some 6/2 or 6/9 calls.