Radar Trends to Watch: February 2023 – O’Reilly

This month’s news seems to have been derailed by the three-ring circus: Musk and Twitter, Musk and Tesla, and SBF and FTX. That said, there are a lot of important things happening. We usually don’t say much about computing hardware, but RISC-V is gathering steam. I’m excited by Ion Stoica’s vision of “sky computing,” which is cloud-independent. A similar but even more radical project is Petals, a system for running the BLOOM large language model across a large number of volunteer hosts: cloud-free cloud computing, which the authors liken to BitTorrent. There’s been a lot of talk about decentralization; this is the real thing. That model for large-scale computation is more interesting, at least to me, than the ability to run any one specific language model.

Artificial Intelligence

  • Adversarial learning tries to confuse machine learning systems by giving them altered input data, tricking them into giving incorrect answers. It is an important technique for improving AI security and accuracy.
  • We all know about AI-generated text, voices, and art; what about handwriting? Calligrapher.ai is a handwriting generator. It’s nowhere near as flexible as tools like Stable Diffusion, but it means that ChatGPT can not only write letters, it can sign them.
  • ChatGPT has been shown to be good at explaining code. It’s also good at rewriting intentionally obfuscated code into a clear, human-readable version. There are obvious applications (not all of them ethical) for this ability.
  • Who needs a database for an app’s backend? For that matter, who needs a backend at all? Just use GPT-3.
  • Reinforcement learning from human feedback (RLHF) is a machine learning training technique that integrates humans into the training loop. Humans provide additional rewards, in addition to automated rewards. RLHF, which was used in ChatGPT, could be a good way to build AI systems that are less prone to hate speech and similar problems.
  • Demis Hassabis, founder of DeepMind, advises that humans be careful in adopting AI. Don’t move fast and break things.
  • A group of researchers from Google has published a Deep Learning Tuning Playbook on Github. It recommends a procedure for hyperparameter tuning to optimize the performance of Deep Learning models.
  • Anthropic, a startup founded by former OpenAI researchers, has created a chatbot named Claude with capabilities similar to ChatGPT. Claude appears to be somewhat less prone to “hallucination” and hate speech, though both remain issues.
  • Satya Nadella has tweeted that Microsoft will offer ChatGPT as part of Azure’s OpenAI service. It isn’t clear how this (paid) service relates to other talk about monetizing ChatGPT.
  • One application for ChatGPT is writing developer documentation and providing a conversational search engine for the documentation and code. Internal documentation is an often-omitted part of any software project.
  • AllenAI (aka AI2) has developed a language model called ACCoRD for generating descriptions of scientific concepts. It is unique in that it rejects the idea of a “best” description, and instead creates several descriptions of a concept, intended for different audiences.
  • A researcher trained a very small neural network to do binary addition, and had some fascinating observations about how the network works.
  • OpenAI is considering a paid, “pro” version of ChatGPT. It’s not clear what additional features the Pro version might have, what it would cost, or whether a free public version with lower performance will remain. The answers no doubt depend on Microsoft’s plans for further integrating ChatGPT into its products.
  • ChatGPT can create a text adventure game, including a multi-user dungeon (MUD) in which the other players are simulated. That’s not surprising in itself. The important question is whether these games have finite boundaries or extend for as long as you keep playing.
  • A startup has built a truth checker for ChatGPT. It filters ChatGPT’s output to detect “hallucinations,” using its own AI that has been trained for a specific domain. They claim to detect 90% of ChatGPT’s errors in a given domain. Users can add their own corrections.
  • Andrej Karpathy has written nanoGPT, a very small version of the GPT language models that can run on small systems, possibly even on a laptop.
  • Petals is a system for running large language models (specifically, BLOOM-176B, roughly the size of GPT-3) collaboratively. Parts of the computation run on different hosts, using compute time donated by volunteers who receive higher priority for their jobs.
  • Having argued that we would eventually see formal languages for prompting natural language text generators, I’m proud to say that someone has done it.
  • DoNotPay has developed an AI “lawyer” that is helping a defendant make arguments in court. The lawyer runs on a cell phone, through which it hears the proceedings. It tells the defendant what to say through Bluetooth earbuds. DoNotPay’s CEO notes that this is illegal in almost all courtrooms. (After receiving threats from bar associations, DoNotPay has abandoned this trial.)
  • Perhaps prompted by claims that Google’s AI efforts have fallen behind OpenAI and others, Google has announced Muse, which generates images from text prompts. They claim that Muse is significantly faster and more accurate than DALL-E 2 and Stable Diffusion.
  • Microsoft has developed an impressive speech synthesis (text-to-speech) model named VALL-E. It is a zero-shot model that can imitate anyone’s voice using only a three-second sample.
  • Amazon has introduced Service Cards for several of their pre-built models (Rekognition, Textract, and Transcribe). Service cards describe the properties of models: how the model was trained, where the training data came from, the model’s biases and weaknesses. They are an implementation of Model Cards, proposed in Model Cards for Model Reporting.
  • The free and open source BLOOM language model can be run on AWS. Getting it running isn’t trivial, but there are instructions that describe how to get the resources you need.
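
The adversarial learning item above can be made concrete with a toy sketch. Here a hand-built linear classifier (the weights and inputs are purely illustrative, not from any of the linked work) is fooled by a fast-gradient-sign-style perturbation: because the gradient of a linear score with respect to the input is just the weight vector, nudging each feature by a small amount in the direction -sign(w) lowers the score as fast as possible per unit of perturbation.

```python
import numpy as np

# Toy linear classifier: score = w . x + b, class 1 if score > 0.
# Weights are hand-picked for illustration; a real attack would
# target a trained model's gradients.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

def adversarial(x, eps):
    # FGSM-style perturbation: move each feature eps in the
    # direction that most decreases the score.
    return x - eps * np.sign(w)

x = np.array([1.0, 1.0, 1.0])       # score = 1.6, classified as 1
x_adv = adversarial(x, eps=0.6)     # score drops by eps * sum(|w|) = 2.1
```

With real models the same idea produces perturbations small enough to be invisible to humans, which is why adversarial examples matter for AI security.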

Data

  • How do you use the third dimension in visualization? Jeffrey Heer (one of the creators of D3) and colleagues are writing about “cinematic visualization.”
  • SkyPilot is an open source platform for running data science jobs on any cloud: it is cloud-independent, and a key part of Ion Stoica’s vision of “sky computing” (provider-independent cloud computing).

Security

  • An annotated field guide to detecting phishing attacks might help users to detect phishes before they do damage. According to one study from 2020, most cyber attacks begin with a phish.
  • Docker security scanning tools inspect Docker images for vulnerabilities and other issues. They could become an important part of software supply chain security.
  • Browser-in-browser phishing attacks are becoming more common, and are difficult to detect. In these attacks, a web site pops up a replica of a single sign-on window from Google, Facebook, or some other SSO provider to capture the user’s login credentials.
  • We’re again seeing an increase in advertisements delivering malware or attracting unwary users to web sites that install malware. Ad blockers provide some protection.
  • Amazon has announced that AWS automatically encrypts all new objects stored in S3. Encryption by default is a big step forward in cloud data security.
  • The Python Package Index (PyPI) continues to suffer from attacks that trick users into installing packages infected with malware. Most notably, a dependency-confusion attack caused the PyTorch nightly build to install a malicious package that stole system information. Software supply chain problems continue to plague us.
  • Messaging provider Slack and continuous integration provider CircleCI were both victims of attacks and thefts of software and data. The companies haven’t been forthcoming with details, but it seems likely that CircleCI has lost all customer secrets.
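
One signal that field guides to phishing typically flag is a lookalike domain: a hostname that contains a trusted brand name but doesn’t actually belong to that brand. The sketch below is a deliberately simplistic heuristic of my own (the brand list and rules are illustrative); real detection uses much richer signals such as certificates, page content, and reputation feeds.

```python
from urllib.parse import urlparse

# Map of protected brand names to their legitimate registered domains.
# Illustrative only; a real system would use a curated, much larger list.
BRANDS = {"google": "google.com", "facebook": "facebook.com"}

def suspicious(url):
    """Flag hostnames that mention a brand but aren't its real domain."""
    host = urlparse(url).hostname or ""
    for brand, real in BRANDS.items():
        if brand in host and host != real and not host.endswith("." + real):
            return True
    return False

suspicious("https://accounts.google.com/signin")    # False: real subdomain
suspicious("https://google.com-login.example.net")  # True: lookalike host
```

The second URL is exactly the shape browser-in-browser attacks imitate: a hostname engineered to look like a single sign-on provider’s domain at a glance.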


Chips and Chip Design

  • A new generation of processors could use vibration to generate a flow of air through the chip, providing cooling without the need for fans. The developers are collaborating with Intel and targeting high-end laptops.
  • Google wants RISC-V to become a “tier-1” chip architecture for Android phones, giving it the same status as ARM. There is already a riscv64 branch in the source repository, though it’s far from a finished product.
  • Ripes is a visual computer architecture simulator for the RISC-V architecture. You can watch your code execute (slowly). It’s primarily a teaching tool, but it’s fun to play with.

Things

  • Boston Dynamics’ humanoid robot Atlas now has the ability to grab and toss things (including awkward and heavy objects). This is a big step towards a robot that can do industrial or construction work.
  • Matter, a standard for smart home connectivity, appears to be gaining momentum. Among other things, it allows devices to interact with a common controller, rather than an app (and possibly a hub) for each device.
  • Science fiction alert: Researchers have created a tractor beam! While it’s very limited, it is capable of pulling specially constructed macroscopic objects.
  • A new catalyst has enabled a specialized solar cell to achieve 9% efficiency in generating hydrogen from water. This is a factor of 10 better than other methods, and approaches the efficiency needed to make “green hydrogen” commercially viable.

Web

  • A not-so-private metaverse: Someone has built a “private metaverse” (hosted on a server somewhere for about $12/month) to display his art and to demonstrate that a metaverse can be open, and doesn’t have to be subject to land-grabs and rent-taking by large corporations.
  • Twitter has cut off API access for third party apps. This was a big mistake the first time (a decade ago); it’s an even bigger mistake now.
  • GoatCounter is an alternative to Google Analytics. It provides “privacy-friendly” web analytics. It can be self-hosted, or used as a service (free to non-commercial users).
  • Google is developing a free tool that websites can use to detect and remove material associated with terrorism, as an aid to help moderators.

Biology

  • Where do we go next with mRNA vaccines? Flu, Zika, HIV, cancer treatments? The vaccines are relatively easy to design and to manufacture.
