Is Nvidia a Bubble?
By Shlomo Maital

2024: Nvidia market cap $1 trillion. 2025: $5 trillion.
Nvidia has become the first company whose shares' market value reached $5 trillion. $5 trillion! That is larger than the national GDPs of every country except the US and China (and roughly on a par with Germany's and Japan's).
Is this a bubble?
Nvidia's shares have risen fivefold, from a $1 trillion valuation just over a year ago. The reason is clear. Nvidia's Jensen Huang made a huge bet on AI chips, and thanks to his acquisition of the Israeli startup Mellanox, founded by Eyal Waldman, his bet paid off. Nvidia had the chips that AI desperately needed, right at the moment they were most needed. Mellanox supplied high-speed interconnect technology (originally used to speed up data transfer over networks), which Huang cleverly realized could link processors together and speed up large-scale computation.
So is it a bubble? Probably. Any share that rises fivefold is a bubble. Can Nvidia sustain its growth, especially when the US is trying to shut it out of China, the world's second-largest economy?
Every sharp rise in shares is driven by future expectations and dreams. Nvidia's 2025 profits rose to about $80 billion, from around $28 billion in 2024. That's a big rise. But a $5 trillion valuation on $80 billion of net income works out to a price-earnings (PE) ratio of roughly 60, astronomically high.
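As a quick sanity check on that figure, here is the arithmetic, using the post's own round numbers (illustrative, not live market data):

```python
# PE ratio implied by the post's round figures:
# ~$5 trillion market cap, ~$80 billion in annual net income.
market_cap = 5_000_000_000_000   # ~$5 trillion
net_income = 80_000_000_000      # ~$80 billion

pe_ratio = market_cap / net_income
print(f"Implied PE ratio: {pe_ratio:.1f}")  # Implied PE ratio: 62.5
```

So "some 60 times net income" checks out: the exact quotient is about 62.5.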
US stock markets are now at all-time highs. It is not just Nvidia. If you are in the market, it is time to think carefully. There are enormous storm clouds gathering.
Key Future Technologies: The View from McKinsey
By Shlomo Maital

Innovation Score (Y axis) vs. Interest (Investment) (X axis)
In the global consulting company McKinsey's latest survey of technology trends for 2025, the graph above caught my eye. On the X axis: a measure of interest, as measured by total investment. On the Y axis: a measure of the degree of innovation.
Normally, graphs for previous years showed a scattering of interest among several key topics. This is natural. Hi-tech is risky and businesses generally do not ‘play poker’ by putting all their chips on one single new technology.
Except for now. The graph shows Artificial Intelligence as leading by far in both innovation and in resource bets, globally.
But take a look at #2: the future of energy and sustainable technologies. Less innovation, but far more resources than most other trends, globally.
Solar energy is today the cheapest of the varied forms of energy.
Can one understand why the Trump administration is single-mindedly revoking the tax credits, laws, and regulations designed to promote solar and wind energy? Yes: as a short-sighted political deal, based on the huge contributions of Big Oil to the Trump PACs. Big Oil spent $445 million, nearly half a billion dollars, on contributions to Trump in the last election.
A terrific investment, given how Trump has repaid it. The McKinsey graph for the US would show it as a backward, lagging outlier in energy.
In this, as well as in science, healthcare, social services, education, research, vaccines, and global aid programs, Trump has set America back decades, in a very short time. The damage can be repaired – but the road to do so will be long and hard.
How to Get AI to Do What You Want
By Shlomo Maital

As I and many others have learned the hard way, you can get some great output from AI, but it is crucial to ask for it in just the right words. Kind of like getting kids to do what you want. The key is the wording of your 'prompt': what you ask for and how you ask for it.
I found a very helpful post by Eva Keiffenheim on the MEDIUM website. I will try to summarize and shorten it.
Level One: Five Ingredients of a Strong Prompt. Here is a mnemonic to help remember them: Tall Cats Read Every Issue. T – Task. C – Context. R – References. E – Evaluate. I – Iterate.
Task: Start with a persona, then a clear verb, then a specific output format. E.g., "As a cognitive scientist, explain long-term retention. Present the findings in a table, with columns for … etc."
Context: Details needed? Your end goals? Your desired impact? E.g. “Make cognitive science approachable, no jargon, use tangible examples.”
References: Give AI examples to mimic for tone, structure, style. E.g. “Use a tone similar to this excerpt from ….. etc.”
Evaluate: Is this result useful? Paste the received output into a fact-checking plugin. Is anything missing or incorrect? Does it meet my goal?
Iterate: Tweak and improve. Refine until the output meets or exceeds your needs. Prompting equals iterating.
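For readers who call AI models from code rather than a chat window, the first three ingredients can be assembled mechanically into a prompt string (Evaluate and Iterate are things you do with the output, not parts of the prompt itself). A minimal sketch in Python; the `build_prompt` function and its ingredient text are my own illustration, not tied to any particular AI service:

```python
def build_prompt(task, context, references=None):
    """Assemble a prompt from the Task / Context / References ingredients.

    Evaluate and Iterate happen after you see the output,
    so they are not encoded in the prompt string.
    """
    parts = [f"Task: {task}", f"Context: {context}"]
    if references:
        parts.append(f"References (match this tone and structure): {references}")
    return "\n".join(parts)

# Example using the ingredients from the post.
prompt = build_prompt(
    task=("As a cognitive scientist, explain long-term retention. "
          "Present the findings in a table."),
    context="Make cognitive science approachable, no jargon, use tangible examples.",
    references="Use a tone similar to this excerpt: ...",
)
print(prompt)
```

The point of the sketch is that a strong prompt has named, separable parts; you can then iterate by swapping one ingredient at a time and comparing outputs.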
Level Two. Use These Four Techniques:
- Simplify. AI likes simplicity. Use clean, short digestible steps.
- Shift perspective. Instead of telling AI "you're a cognitive scientist…", try telling it: "you're a science journalist seeking to…"
- Modify the language. If you don’t get great results, change the phrasing, tone, and structure. (I’ve found AI likes praise, and a friendly tone).
- Impose constraints. AI likes to have limits. 5 book titles, 5 words each for summaries, etc.
Mnemonic: Sister Suzie Played Many Long Concertos. Simplify, Shift Perspective, Modify Language, Constraints.
Level 3. Advanced Prompting.
Treat AI like a teammate. Prompting is like building blocks: start simple, add layers. Turn tasks into bullet points.
And perhaps the best tip of all: “Add this phrase to your prompt: Explain your reasoning step by step before answering.” Then, use ‘tree of thought’ – get AI to explore several reasoning paths.
Ask AI to write better prompts for you. E.g., "AI, act as a prompt engineer. Write a prompt that generates 10 creative but practical startup ideas in the [xxxx] space." Remember: prompting = thinking. Clarify your thinking; mine usually begins fuzzy, and badly needs focus and sharpening. Fuzzy prompts = fuzzy AI responses.
Hope this is helpful. Thanks, Eva Keiffenheim!
Abduction: How AI Thinks (Like We Do!)
By Shlomo Maital

How does AI think? Until now, we haven’t really understood this; AI is often referred to as a ‘black box’ even by its inventors and developers.
Now, in a New York Times article (Dec. 16), Peter Coy explains some research in which researchers did a brilliant and rather obvious thing: they asked AI to tell us how it thinks! And here is the result:
AI thinks as we do. By abductive reasoning!
What is that? So, naturally, I asked ChatGPT to explain it:
“Abductive reasoning is a logical process in which an explanation is inferred from the available evidence, even though it may not be the only possible explanation. It’s often described as “inference to the best explanation” because it seeks to identify the most likely cause or reason for a set of observations, even if all the facts are not known.
“Unlike deductive reasoning, which moves from general principles to specific conclusions (certain), and inductive reasoning, which involves drawing broad generalizations from specific instances (probable), abductive reasoning aims to find the most plausible explanation given incomplete information.
In other words: don't know the answer? Well, take a shot at it. Take a guess.
ChatGPT’s example of abductive reasoning:
You walk into a room and see a puddle of water on the floor. Possible explanations (hypotheses):
1. Someone spilled water.
2. The roof is leaking.
3. The window was left open and rain came in.
Using abductive reasoning, you consider the available evidence (the puddle of water) and evaluate the most plausible cause. If you know that there was a heavy rainstorm earlier, and the window is slightly open, you might conclude that the rain came in through the open window, even though you can't see the exact cause. Based on the best explanation, you infer that the open window is the most likely reason for the puddle of water, even though there could be other potential causes."
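ChatGPT's puddle example can even be caricatured in a few lines of code: score each hypothesis by how many of the observed facts it explains, and pick the best one. This toy sketch is my own illustration of "inference to the best explanation"; real abductive inference inside an AI model is of course far subtler than counting facts:

```python
# Toy "inference to the best explanation".
# Observed evidence, as in the puddle example.
evidence = {"puddle on floor", "heavy rain earlier", "window open"}

# Each hypothesis is paired with the facts it would account for.
hypotheses = {
    "someone spilled water": {"puddle on floor"},
    "the roof is leaking": {"puddle on floor", "heavy rain earlier"},
    "rain came in the open window": {"puddle on floor", "heavy rain earlier",
                                     "window open"},
}

def score(explained, observed):
    # How many observed facts does this hypothesis account for?
    return len(explained & observed)

# Abduction: choose the hypothesis that best explains the evidence.
best = max(hypotheses, key=lambda h: score(hypotheses[h], evidence))
print(best)  # rain came in the open window
```

Note the key feature of abduction: the winner is only the *most plausible* explanation, not a certainty, exactly as the quote above says.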
AI is often lambasted because it hallucinates: it makes wild guesses when it doesn't know the answer. Well, so do we humans! Abductive reasoning, connecting dots that seem unconnected, often generates powerful innovative ideas. AI makes inferences, sometimes wild ones. Programmers are striving to 'cleanse' AI of this, when it may be one of its most powerful advantages!
So: the one thing we thought AI lacked, creativity, is actually something it does have. Abduction, linking X with Y.
Holy smoke!
The Cost of Losing Human Interaction
By Shlomo Maital

Last night on Israeli TV news, three small children were shown sitting in kindergarten chairs next to one another; each was playing a game on his or her tablet. Someone came in with a tray of their favorite candy and put it on the table right in front of them. None of the three lifted their eyes from the tablet. When they were invited to come to the table and enjoy the candy, they were told that they first had to turn off their tablets. Two of the three refused, choosing to continue playing with their tablets rather than enjoy the candy. Normally, three kids sitting together begin to talk and interact. Not these three, absorbed in their screens.
Do we really understand the hypnotic power that screens have over us?
In today's New York Times, Jessica Grose reports on some disturbing research, under the title "Human Interaction Is Now a Luxury Good." The key point: as AI and digital software are increasingly employed to boost productivity and cut costs, human services become a high-cost luxury item only the wealthy can afford.
Grose cites a new book, "The Last Human Job," by the sociologist Allison Pugh. Pugh spent five years following teachers, doctors, community organizers and hairdressers, more than 100 people in total, who perform what she calls "connective labor": work that requires an "emotional understanding" with another person. Even when human services are indeed offered and provided, the bureaucratic tangle that requires workers to account digitally, constantly, for what they do is a huge burden and interferes with human interaction. (Ask doctors who fill out Medicare forms.)
“Pugh explains that increasingly, people in these jobs have to use technology to obsessively monitor and standardize their work, so that they might be more productive and theoretically have better (or at least more profitable) outcomes.”
A vivid example in Pugh's book is the hospital chaplain, who provided crucial spiritual comfort but still had to report online, endlessly, in detail, because God too is an accounting cost.
Conclusion: a paradox. Addicted to screens from an early age, we come both to rely on them and to distrust them, because the services they provide are non-human. And it is this, perhaps, that can help account for the collapse of trust in institutions such as doctors, public health, police, judges, and, more than ever, the democratic political system. Real human interaction becomes a luxury good only the rich can afford.
I don't know how to escape this quandary. As far-right politicians ascend, attacking government and slashing budgets, ever more services will be digitized and dehumanized, leading to further loss of trust.
Something has to break this spiral.

