Hacker, artist, maker who works for the Museum of Science in Boston.
586 stories · 92 followers

Bitcoin power plant is turning a 12,000-year-old glacial lake into a hot tub

1 Comment and 3 Shares
In this aerial photo of Greenidge Generation's power plant outside Dresden, NY, Seneca Lake is visible in the background. The lake receives warm water from Greenidge's operations. (credit: Greenidge Generation LLC)

The fossil fuel power plant that a private equity firm revived to mine bitcoin is at it again. Not content to just pollute the atmosphere in pursuit of a volatile crypto asset with little real-world utility, this experiment in free marketeering is also dumping tens of millions of gallons of hot water into glacial Seneca Lake in upstate New York.

“The lake is so warm you feel like you’re in a hot tub,” Abi Buddington, who lives near the Greenidge power plant, told NBC News.

In the past, nearby residents weren’t necessarily enamored with the idea of a pollution-spewing power plant warming their deep, cold water lake, but at least the electricity produced by the plant was powering their homes. Today, they’re lucky if a small fraction does. Most of the time, the turbines are burning natural gas solely to mint profits for the private equity firm Atlas Holdings by mining bitcoin.

Atlas, the firm that bought Greenidge, has been ramping up its bitcoin mining aspirations over the last year and a half, installing thousands of mining rigs that have produced over 1,100 bitcoin as of February 2021. The company has plans to install thousands more rigs, ultimately using 85 MW of the station’s total 108 MW capacity.

Seneca Lake’s water isn’t the only thing the power plant is warming. In December 2020, with the power plant running at just 13 percent of its capacity, Atlas’ bitcoin operations there produced 243,103 tons of carbon dioxide and equivalent greenhouse gases, a ten-fold increase from January 2020, when mining commenced. NOx pollution, which is linked to everything from asthma to lung cancer and premature death, also rose 10x.

The plant currently has a permit to emit 641,000 tons CO2e every year, though if Atlas wants to maximize its return on investment and use all 106 MW of the plant’s capacity, its carbon pollution could surge to 1.06 million tons per year. Expect NOx emissions—and health impacts—to rise accordingly. The project’s only tangible benefit (apart from dividends appearing in investors’ pockets) is the company’s claimed 31 jobs.

Sparkling specimen

The 12,000-year-old Seneca Lake is a sparkling specimen of the Finger Lakes region. It still boasts high water quality, clean enough to drink with just limited treatment. Its waters are home to a sizable lake trout population that’s large enough to maintain the National Lake Trout Derby for 57 years running. The prized fish spawn in the rivers that feed the lake, and it’s into one of those rivers—the Keuka Lake Outlet, known to locals for its rainbow trout fishing—that Greenidge dumps its heated water. 

Rainbow trout are highly sensitive to fluctuations in water temperature, with the fish happiest in the mid-50s. Because cold water holds more oxygen, as temps rise, fish become stressed. Above 70˚ F, rainbow trout stop growing and stressed individuals start dying. Experienced anglers don’t bother fishing when water temps get to that point.

Greenidge has a permit to dump 135 million gallons of water per day into the Keuka Lake Outlet as hot as 108˚ F in the summer and 86˚ F in the winter. New York’s Department of Environmental Conservation reports that over the last four years, the plant’s daily maximum discharge temperatures have averaged 98˚ in summer and 70˚ in winter. That water eventually makes its way to Seneca Lake, where it can result in tropical surface temps and harmful algal blooms. Residents say lake temperatures are already up, though a full study won't be completed until 2023.

Casting about for profits

Atlas, the private equity firm, bought the Greenidge power plant in 2014 and converted it from coal to natural gas. The firm initially intended it to be a peaker plant that would sell power to the grid when demand spiked. 

But in the three years that Atlas spent renovating the plant, the world changed. Natural gas, which was once viewed as a bridge fuel, is increasingly being seen as a dead end. Renewable sources like wind and solar continue to plunge in price, so much so that by 2019, the economics of power plants like Greenidge meant that 60% of them didn’t run more than six hours in a row. Today, renewable sources backed by batteries are cheaper than gas-powered peaker plants, and even batteries alone are threatening the fossil behemoths.

Though Atlas spent $60 million retrofitting the old coal plant to run on gas, it didn’t spring for the more advanced combined cycle technology, which would have helped it operate profitably as a peaker. In the search for higher returns, the company landed on bitcoin mining, Greenidge’s CEO told NBC. After a small test suggested that mining would be profitable, the firm plowed significant sums into the project. By the end of the year, Greenidge and Atlas plan to have 18,000 rigs mining at the site with another 10,500 on the horizon. When Atlas’ plans for Greenidge are complete, mining rigs will consume 79% of the plant’s rated capacity. 

Atlas won’t stop there, of course. The firm, through Greenidge Generation Holdings, will lease a building in Spartanburg, South Carolina, from a bankrupt book and magazine printer and convert it into a datacenter for cryptocurrency mining. Unlike the original Greenidge, this project doesn’t have onsite power, and Atlas claims it’ll use two-thirds “zero carbon” power from sources like nuclear. The rest? Fossil, most likely, and Atlas says it’ll offset emissions from both its Spartanburg and New York operations. But the company hasn’t said how, and many offset programs don’t reduce emissions as claimed.

As for the hot tub that Seneca Lake's residents say it's turning into? There’s no offset for that.

jprodgers
29 days ago
You don't need a general AI to make digital paperclips when plain old capitalism will do it for you. The experiment has failed, and it needs to be shut down.
Somerville, MA
ChrisDL
28 days ago
New York

How a Docker footgun led to a vandal deleting NewsBlur’s MongoDB database

6 Comments and 13 Shares

tl;dr: A vandal deleted NewsBlur’s MongoDB database during a migration. No data was stolen or lost.

I’m in the process of moving everything on NewsBlur over to Docker containers in prep for a big redesign launching next week. It’s been a great year of maintenance and I’ve enjoyed the fruits of Ansible + Docker for NewsBlur’s 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models). The day was wrapping up and I settled into a new book on how to tame the machines once they’re smarter than us when I received a strange NewsBlur error on my phone.

"query killed during yield: renamed collection 'newsblur.feed_icons' to 'newsblur.system.drop.1624498448i220t-1.feed_icons'"

There is honestly no set of words in that error message that I ever want to see again. What is drop doing in that error message? Better go find out.

I logged into the MongoDB machine to check out what state the DB was in, and I came across the following…

nbset:PRIMARY> show dbs
READ__ME_TO_RECOVER_YOUR_DATA   0.000GB
newsblur                        0.718GB

nbset:PRIMARY> use READ__ME_TO_RECOVER_YOUR_DATA
switched to db READ__ME_TO_RECOVER_YOUR_DATA
    
nbset:PRIMARY> db.README.find()
{ 
    "_id" : ObjectId("60d3e112ac48d82047aab95d"), 
    "content" : "All your data is a backed up. You must pay 0.03 BTC to XXXXXXFTHISGUYXXXXXXX 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: FTHISGUY@recoverme.one and you will receive a link to download your database dump." 
}

Two thoughts immediately occurred:

  1. Thank goodness I have some recently checked backups on hand
  2. No way they have that data without me noticing

Three and a half hours before this happened, I switched the MongoDB cluster over to the new servers. When I did that, I shut down the original primary in order to delete it in a few days when all was well. And thank goodness I did that as it came in handy a few hours later. Knowing this, I realized that the hacker could not have taken all that data in so little time.

With that in mind, I’d like to answer a few questions about what happened here.

  1. Was any data leaked during the hack? How do you know?
  2. How did NewsBlur’s MongoDB server get hacked?
  3. What will happen to ensure this doesn’t happen again?

Let’s start by talking about the most important question of all: what happened to your data?

1. Was any data leaked during the hack? How do you know?

I can definitively write that no data was leaked during the hack. I know this because of two different sets of logs showing that the automated attacker only issued deletion commands and did not transfer any data off of the MongoDB server.

Below is a snapshot of the bandwidth of the db-mongo1 machine over 24 hours:

You can imagine the stress I experienced in the forty minutes between 9:35p, when the hack began, and 10:15p, when the fresh backup snapshot was identified and put into gear. Let’s break down each moment:

  1. 6:10p: The new db-mongo1 server was put into rotation as the MongoDB primary server. This machine was the first of the new, soon-to-be private cloud.
  2. 9:35p: Three hours later an automated hacking attempt opened a connection to the db-mongo1 server and immediately dropped the database. Downtime ensued.
  3. 10:15p: Before the former primary server could be placed into rotation, a snapshot of the server was made to ensure the backup would not delete itself upon reconnection. This cost a few hours of downtime, but saved nearly 18 hours of a day’s data by not forcing me to go into the daily backup archive.
  4. 3:00a: Snapshot completes, replication from original primary server to new db-mongo1 begins. What you see in the next hour and a half is what the transfer of the DB looks like in terms of bandwidth.
  5. 4:30a: Replication, which is inbound from the old primary server, completes, and now replication begins outbound on the new secondaries. NewsBlur is now back up.

The most important bit of information the above chart shows us is what a full database transfer looks like in terms of bandwidth. From 6p to 9:30p, the traffic was what you’d expect from a working primary server with multiple secondaries syncing to it. At 3a, you’ll see an enormous amount of data transferred.

This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

We can also reason that the vandal was not able to access any files that were on the server outside of MongoDB, thanks to a recent version of MongoDB running in a Docker container. Unless the attacker had access to a 0-day for both MongoDB and Docker, it is highly unlikely they were able to break out of the MongoDB server connection.

While the server was being snapshotted, I used that time to figure out how the hacker got in.

2. How did NewsBlur’s MongoDB server get hacked?

Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn’t work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was “active”, running sudo iptables -L | grep 27017 showed that MongoDB was open to the world. This has been a Docker footgun since 2014.
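If you want to check for this yourself, one quick option is a probe from a machine outside your network. Here’s a minimal sketch with a hypothetical hostname (not NewsBlur’s actual tooling):

import socket

# Run this from a machine *outside* the network in question. If the
# connection succeeds, the MongoDB port is reachable from the public
# internet regardless of what ufw reports.
try:
    socket.create_connection(("db-mongo1.example.com", 27017), timeout=3)
    print("27017 is open to the world")
except OSError:
    print("27017 is filtered or closed")

Publishing the container’s port only on loopback (for example, -p 127.0.0.1:27017:27017) or adding rules to the DOCKER-USER iptables chain are the usual ways to keep Docker from punching through the host firewall.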

To be honest, I’m a bit surprised it took over 3 hours from when I flipped the switch to when a hacker/vandal dropped NewsBlur’s MongoDB collections and pretended to ransom about 250GB of data. This is the work of an automated hack and one that I was prepared for. NewsBlur was back online a few hours later once the backups were restored and the Docker-made hole was patched.

It would make for a much more dramatic read if I had been hit through a vulnerability in Docker instead of a footgun. By silently overriding the firewall, Docker has made it easier for developers who want to open up ports on their containers at the expense of security. Better would be for Docker to issue a warning when it detects that the most popular firewall on Linux is active and filtering traffic to a port that Docker is about to open.

The second reason we know that no data was taken comes from looking through the MongoDB access logs. With these rich and verbose logging sources, we can invoke a pretty neat command to find every connection to MongoDB that didn’t come from one of the 100 known NewsBlur machines.


$ cat /var/log/mongodb/mongod.log | egrep -v "159.65.XX.XX|161.89.XX.XX|<< SNIP: A hundred more servers >>"

2021-06-24T01:33:45.531+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26003 #63455699 (1189 connections now open)
2021-06-24T01:33:45.635+0000 I NETWORK  [conn63455699] received client metadata from 171.25.193.78:26003 conn63455699: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.010+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26557 #63455724 (1189 connections now open)
2021-06-24T01:33:46.092+0000 I NETWORK  [conn63455724] received client metadata from 171.25.193.78:26557 conn63455724: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.500+0000 I NETWORK  [conn63455724] end connection 171.25.193.78:26557 (1198 connections now open)
2021-06-24T01:33:46.533+0000 I NETWORK  [conn63455699] end connection 171.25.193.78:26003 (1200 connections now open)
2021-06-24T01:34:06.533+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:10056 #63456621 (1266 connections now open)
2021-06-24T01:34:06.627+0000 I NETWORK  [conn63456621] received client metadata from 185.220.101.6:10056 conn63456621: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:06.890+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:21642 #63456637 (1264 connections now open)
2021-06-24T01:34:06.962+0000 I NETWORK  [conn63456637] received client metadata from 185.220.101.6:21642 conn63456637: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - starting
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping 1 collections
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping collection: config.transactions
2021-06-24T01:34:08.020+0000 I STORAGE  [conn63456637] dropCollection: config.transactions (no UUID) - renaming to drop-pending collection: config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 }
2021-06-24T01:34:08.029+0000 I REPL     [replication-14545] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 1), t: -1 })
2021-06-24T01:34:08.030+0000 I STORAGE  [replication-14545] Finishing collection drop for config.system.drop.1624498448i1t-1.transactions (no UUID).
2021-06-24T01:34:08.030+0000 I COMMAND  [conn63456637] dropDatabase config - successfully dropped 1 collections (most recent drop optime: { ts: Timestamp(1624498448, 1), t: -1 }) after 7ms. dropping database
2021-06-24T01:34:08.032+0000 I REPL     [replication-14546] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 5), t: -1 })
2021-06-24T01:34:08.041+0000 I COMMAND  [conn63456637] dropDatabase config - finished
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - starting
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - dropping 37 collections

<< SNIP: It goes on for a while... >>

2021-06-24T01:35:18.840+0000 I COMMAND  [conn63456637] dropDatabase newsblur - finished

The above is a lot, but the important bit of information to take from it is that by using a subtractive filter, capturing everything that doesn’t match a known IP, I was able to find the two connections that were made a few seconds apart. Both connections from these unknown IPs occurred only moments before the database-wide deletion. By following the connection ID, it became easy to see the hacker come into the server only to delete it seconds later.

Interestingly, when I visited the IP address of the two connections above, I found a Tor exit router:

This means that it is virtually impossible to track down who is responsible due to the anonymity-preserving quality of Tor exit routers. Tor exit nodes have poor reputations due to the havoc they wreak. Site owners are split on whether to block Tor entirely, but some see the value of allowing anonymous traffic to hit their servers. In NewsBlur’s case, because NewsBlur is a home of free speech, allowing users in countries with censored news outlets to bypass restrictions and get access to the world at large, the continuing risk of supporting anonymous Internet traffic is worth the cost.

3. What will happen to ensure this doesn’t happen again?

Of course, being in support of free speech and providing enhanced ways to access speech comes at a cost. So for NewsBlur to continue serving traffic to all of its worldwide readers, several changes have to be made.

The first change is the one that, ironically, we were in the process of moving to. A VPC, a virtual private cloud, keeps critical servers accessible only from other servers in a private network. But in moving to a private network, I need to migrate all of the data off of the publicly accessible machines. And this was the first step in that process.

The second change is to use database user authentication on all of the databases. We had been relying on the firewall to provide protection against threats, but when the firewall silently failed, we were left exposed. Who’s to say whether this would have been caught if the firewall had failed but authentication was in place? I suspect the password needs to be long enough to not be brute-forced, because eventually, knowing that an open but password-protected DB is there, it could very possibly end up on a list.
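For what it’s worth, wiring authentication into the app is a small change with PyMongo, the same driver that shows up in the logs above. A minimal sketch, with made-up credentials and hostname:

from pymongo import MongoClient

# Hypothetical credentials and host; the point is that mongod runs with
# --auth enabled and the app always connects with a long, random password.
client = MongoClient(
    "mongodb://newsblur_app:LONG_RANDOM_PASSWORD@db-mongo1:27017/newsblur"
)
print(client.newsblur.list_collection_names()[:5])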

Lastly, a change needs to be made as to which database users have permission to drop the database. Most database users only need read and write privileges. The ideal would be a localhost-only user being allowed to perform potentially destructive actions. If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.
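A sketch of what that separation could look like, again with PyMongo and hypothetical names: the application account gets readWrite on the one database and nothing more, so it can add and edit documents but can’t issue dropDatabase, which stays with an admin account that only ever connects from localhost.

from pymongo import MongoClient

# Assumes you're connected as an administrative user over localhost.
admin = MongoClient("mongodb://admin:ADMIN_PASSWORD@127.0.0.1:27017/admin")

# App user scoped to the "newsblur" database: read/write on its
# collections, but no dropDatabase and no access to other databases.
admin.newsblur.command(
    "createUser",
    "newsblur_app",
    pwd="LONG_RANDOM_PASSWORD",
    roles=[{"role": "readWrite", "db": "newsblur"}],
)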

But each of these is only one piece of a defense strategy. As this well-attended Hacker News thread from the day of the hack made clear, a proper defense strategy can never rely on only one well-setup layer. And for NewsBlur, that layer was an allowlist-only firewall that worked perfectly up until it didn’t.

As usual, the real heroes are backups. Regular, well-tested backups are a necessary component to any web service. And with that, I’ll prepare to launch the big NewsBlur redesign later this week.

samuel
37 days ago
What a week. In other news, new blog design launched!
Cambridge, Massachusetts
deezil
36 days ago
Thanks for being above-board with all this! The HackerNews comment section was a little brutal towards you about some things, but I like that you've been transparent about everything.
samuel
36 days ago
HN only knows how to be brutal, which I always appreciate.
acdha
35 days ago
Thanks for writing this up. That foot-gun really needs fixing.
jprodgers
36 days ago
Somerville, MA
popular
36 days ago
5 public comments
seriousben
35 days ago
Great root cause analysis of a security incident.
Canada
chrisrosa
36 days ago
Great write up Samuel. And kudos for your swift and effective response.
San Francisco, CA
jshoq
36 days ago
This is a great account on how to recover a service from a major outage. In this case, NewsBlur was attacked by a scripter that used a well known hole to attack the system. In this case, a well planned and validated backup setup helped NewsBlur to get their service back online quickly. This is a great read of a blameless post mortem executed well.
JS
Seattle, WA
jqlive
37 days ago
Thanks for the write up, it was interesting to read and very transparent of you. It would be an interesting read to know how you'll be applying ML Models to Newsblur.
CN/MX
BLueSS
37 days ago
Thanks, Samuel, for your hard work and efforts keeping NewsBlur alive!

Linkdump: May 2021

1 Comment

jprodgers
77 days ago
The first link blew me away, but these monthly link dumps are always worth checking out.
Somerville, MA

Coding on Raspberry Pi remotely with Visual Studio Code

1 Comment

Jim Bennett from Microsoft, who showed you all how to get Visual Studio Code up and running on Raspberry Pi last week, is back to explain how to use VS Code for remote development on a headless Raspberry Pi.

Like a lot of Raspberry Pi users, I like to run my Raspberry Pi as a ‘headless’ device to control various electronics – such as a busy light to let my family know I’m in meetings, or my IoT powered ugly sweater.

The upside of headless is that my Raspberry Pi can be anywhere, not tied to a monitor, keyboard and mouse. The downside is programming and debugging it – do you plug your Raspberry Pi into a monitor and run the full Raspberry Pi OS desktop, or do you use Raspberry Pi OS Lite and try to program and debug over SSH using the command line? Or is there a better way?

Remote development with VS Code to the rescue

There is a better way – using Visual Studio Code remote development! Visual Studio Code, or VS Code, is a free, open source, developer’s text editor with a whole swathe of extensions to support you coding in multiple languages, and provide tools to support your development. I practically live day to day in VS Code: whether I’m writing blog posts, documentation or Python code, or programming microcontrollers, it’s my work ‘home’. You can run VS Code on Windows, macOS, and of course on a Raspberry Pi.

One of the extensions that helps here is the Remote SSH extension, part of a pack of remote development extensions. This extension allows you to connect to a remote device over SSH, and run VS Code as if you were running on that remote device. You see the remote file system, the VS Code terminal runs on the remote device, and you access the remote device’s hardware. When you are debugging, the debug session runs on the remote device, but VS Code runs on the host machine.

Raspberry Pi 4

For example – I can run VS Code on my MacBook Pro, and connect remotely to a Raspberry Pi 4 that is running headless. I can access the Raspberry Pi file system, run commands on a terminal connected to it, access whatever hardware my Raspberry Pi has, and debug on it.

Remote SSH needs a Raspberry Pi 3 or 4. It is not supported on older Raspberry Pis, or on Raspberry Pi Zero.

Set up remote development on Raspberry Pi

For remote development, your Raspberry Pi needs to be connected to your network either by ethernet or WiFi, and have SSH enabled. The Raspberry Pi documentation has a great article on setting up a headless Raspberry Pi if you don’t already know how to do this.

You also need to know either the IP address of the Raspberry Pi, or its hostname. If you don’t know how to do this, it is also covered in the Raspberry Pi documentation.

Connect to the Raspberry Pi from VS Code

Once the Raspberry Pi is set up, you can connect from VS Code on your Mac or PC.

First make sure you have VS Code installed. If not, you can install it from the VS Code downloads page.

From inside VS Code, you will need to install the Remote SSH extension. Select the Extensions tab from the sidebar menu, then search for Remote development. Select the Remote Development extension, and select the Install button.

Next you can connect to your Raspberry Pi. Launch the VS Code command palette using Ctrl+Shift+P on Linux or Windows, or Cmd+Shift+P on macOS. Search for and select Remote SSH: Connect current window to host (there’s also a connect to host option that will create a new window).

Enter the SSH connection details, using user@host. For the user, enter the Raspberry Pi username (the default is pi). For the host, enter the IP address of the Raspberry Pi, or the hostname. The hostname needs to end with .local, so if you are using the default hostname of raspberrypi, enter raspberrypi.local.

The .local syntax is supported on macOS and the latest versions of Windows or Linux. If it doesn’t work for you then you can install additional software locally to add support. On Linux, install Avahi using the command sudo apt-get install avahi-daemon. On Windows, install either Bonjour Print Services for Windows, or iTunes for Windows.

For example, to connect to my Raspberry Pi 400 with a hostname of pi-400 using the default pi user, I enter pi@pi-400.local.

The first time you connect, it will validate the fingerprint to ensure you are connecting to the correct host. Select Continue from this dialog.

Enter your Raspberry Pi’s password when prompted. The default is raspberry, but you should have changed this (really, you should!).

VS Code will then install the relevant tools on the Raspberry Pi and configure the remote SSH connection.

Code!

You will now be all set up and ready to code on your Raspberry Pi. Start by opening a folder or cloning a git repository and away you go coding, debugging and deploying your applications.

In the remote session, not all extensions you have installed locally will be available remotely. Any extensions that change the behavior of VS Code as an application, such as themes or tools for managing cloud resources, will be available.

Things like language packs and other programming tools are not installed in the remote session, so you’ll need to re-install them. When you install these extensions, you’ll see the Install button has changed to Install in SSH: <hostname> to show it’s being installed remotely.

VS Code may seem daunting at first – it’s a powerful tool with a huge range of extensions. The good news is Microsoft has you covered with lots of hands-on, self-guided learning guides on how to use it with different languages and development tools, from using Git version control, to developing web applications. There’s even a guide to learning Python basics with Wonder Woman!

Jim Bennett

You remember Jim – his blog Expecting Someone Geekier is well good. You can find him on Twitter @jimbobbennett and on github.

The post Coding on Raspberry Pi remotely with Visual Studio Code appeared first on Raspberry Pi.

jprodgers
165 days ago
VS Code has been my IDE for years now, and I love to see how much work they've put into making it really work on the Raspberry PI. The PI is going to be a solid development platform quite quickly I imagine, kind of already is with this.
Somerville, MA
lousyd
165 days ago
But... why? You could develop on so many other things. And you'd choose the Pi? I mean... I wouldn't.
jprodgers
109 days ago
@lousyd I didn't say this was the best development platform, but it is extremely capable for what it is right out of the box. The latest PI 4 with 8GB of ram runs well enough, but MS has put some effort into optimizing VS Code for the PI, and it is a fully functional IDE. The toolchains for the PICO, and Arduino are single line installs. I could probably set an entire PI 4 environment in the time it takes me to pull down all the things I'd need on a Win10 or Mac computer. Also, no problem running KiCAD, so you could also make your PCBs on the same computer for a very low cost. What they've accomplished here is truly remarkable. Lots of kids will be getting into developing their own projects cheaply and quickly.

NeoPixel dithering with Pico

1 Comment

In the extra special Raspberry Pi Pico launch issue of HackSpace magazine, editor Ben Everard shows you how to get extra levels of brightness out of your LEDs with our new board.

WS2812B LEDs, commonly known as NeoPixels, are cheap and widely available. They have red, green, and blue LEDs in a single package with a microcontroller that lets you control a whole string of them using just one pin on your microcontroller.

The three connections may be in a different order on your LED strip, so check the labels to make sure they’re connected correctly

However, they do have a couple of disadvantages:

1) The protocol needed to control them is timing-dependent and often has to be bit-banged.

2) Each colour has 8 bits, so has 255 levels of brightness. However, these aren’t gamma-corrected, so the low levels of brightness have large steps between them. For small projects, we often find ourselves only using the lower levels of brightness, so often only have 10 or 20 usable levels of brightness.

There will usually be wires already connected to your strip, but if you cut it, you’ll need to solder new wires on

We’re going to look at how two features of Pico help solve these problems. Firstly, Programmable I/O (PIO) lets us implement the control protocol on a state machine rather than the main processing cores. This means that we don’t have to dedicate any processor time to sending the data out. Secondly, having two cores means we can use one of the processing cores to dither the NeoPixels. This means shifting them rapidly between different brightness levels to make pseudo-levels of brightness.

For example, if we wanted a brightness level halfway between levels 3 and 4, we’d flick the brightness back and forth between 3 and 4. If we can do this fast enough, our eyes blur this into a single brightness level and we don’t see the flicker. By varying the amount of time at levels 3 and 4, we can make many virtual levels of brightness. While one core is doing this, we still have a processing core completely free to manipulate the data we want to display.
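One simple way to do that is to carry the rounding error forward from one refresh to the next, which is what the C implementation later in this article does. Here is the idea in a few lines of plain Python, purely as an illustration:

def dither(target, virtual_bits=12, frames=8):
    # Yield 8-bit values whose average approximates a target expressed in
    # virtual_bits of resolution, by carrying the rounding error forward.
    shift = virtual_bits - 8
    error = 0
    for _ in range(frames):
        value = (target + error) >> shift            # nearest level we can show
        error = (target + error) - (value << shift)  # remember what we under-lit
        yield value

# A 9-bit target of 3 sits halfway between 8-bit levels 1 and 2:
print(list(dither(3, virtual_bits=9)))   # [1, 2, 1, 2, 1, 2, 1, 2]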

First, we’ll need a PIO program to communicate with the WS2812B LEDs. The Pico development team have provided an example PIO program to work with – you can see the full details here, but we’ll cover the essentials here. The PIO code is:

.program ws2812
.side_set 1
.define public T1 2
.define public T2 5
.define public T3 3
bitloop:
    out x, 1       side 0 [T3 - 1]
    jmp !x do_zero side 1 [T1 - 1]
do_one:
    jmp bitloop    side 1 [T2 - 1]
do_zero:
    nop            side 0 [T2 - 1]

We looked at the PIO syntax in the main cover feature, but it’s basically an assembly language for the PIO state machine. The WS2812B protocol uses pulses at a rate of 800kHz, but the length of the pulse determines if a 1 or a 0 is being sent. This code uses jumps to move through the loop to set the timings depending on whether the bit (stored in the register x) is 0 or 1. The T1, T2, and T3 variables hold the timings, so are used to calculate the delays (with 1 taken off as the instruction itself takes one clock cycle). There’s also a section in the pio file that links the PIO code and the C code:

% c-sdk {
#include "hardware/clocks.h"

static inline void ws2812_program_init(PIO pio, uint sm, uint offset, uint pin, float freq, bool rgbw) {
    pio_gpio_init(pio, pin);
    pio_sm_set_consecutive_pindirs(pio, sm, pin, 1, true);
    pio_sm_config c = ws2812_program_get_default_config(offset);
    sm_config_set_sideset_pins(&c, pin);
    sm_config_set_out_shift(&c, false, true, rgbw ? 32 : 24);
    sm_config_set_fifo_join(&c, PIO_FIFO_JOIN_TX);
    int cycles_per_bit = ws2812_T1 + ws2812_T2 + ws2812_T3;
    float div = clock_get_hz(clk_sys) / (freq * cycles_per_bit);
    sm_config_set_clkdiv(&c, div);
    pio_sm_init(pio, sm, offset, &c);
    pio_sm_set_enabled(pio, sm, true);
}
%}

Most of this is setting the various PIO options – the full range is detailed in the Raspberry Pi Pico C/C++ SDK document.

sm_config_set_out_shift(&c, false, true, rgbw ? 32 : 24);

This line sets up the output shift register which holds each 32 bits of data before it’s moved bit by bit into the PIO state machine. The parameters are the config (that we’re setting up and will use to initialise the state machine); a Boolean value for shifting right or left (false being left); and a Boolean value for autopull which we have set to true. This means that whenever the output shift register falls below a certain threshold (set in the next parameter), the PIO will automatically pull in the next 32 bits of data.

Using a text editor with programmer’s features such as syntax highlighting will make the job a lot easier

The final parameter is set using the expression rgbw ? 32 : 24. This means that if the variable rgbw is true, the value 32 is passed, otherwise 24 is passed. The rgbw variable is passed into this function when we create the PIO program from our C program and is used to specify whether we’re using an LED strip with four LEDs in each package (one red, one green, one blue, and one white) or three (red, green, and blue).

The PIO hardware works on 32-bit words, so each chunk of data we write with the values we want to send to the LEDs has to be 32 bits long. However, if we’re using RGB LED strips, we actually want to work in 24-bit lengths. By setting the autopull threshold to 24, we still pull in 32 bits each time, but once 24 bits have been read, another 32 bits are pulled in, which overwrite the remaining 8 bits.

sm_config_set_fifo_join(&c, PIO_FIFO_JOIN_TX);

Each state machine has two four-word FIFOs attached to it, normally one for data going in and one for data coming out. However, as we only have data going into our state machine, we can join them together to form a single eight-word FIFO using the above line. This gives us a small buffer of time to write data to in order to avoid the state machine running out of data and execution stalling. The following three lines are used to set the speed the state machine runs at:

int cycles_per_bit = ws2812_T1 + ws2812_T2 + ws2812_T3;
float div = clock_get_hz(clk_sys) / (freq * cycles_per_bit);
sm_config_set_clkdiv(&c, div);

The WS2812B protocol demands that data is sent out at a rate of 800kHz. However, each bit of data requires a number of state machine cycles. In this case, they’re defined in the variables T1, T2, and T3. If you look back at the original PIO program, you’ll see that these are used in the delays (always with 1 taken off the value because the initial instruction takes one cycle before the delay kicks in). Every loop of the PIO program will take T1 + T2 + T3 cycles. We use these values to calculate the speed we want the state machine to run at, and from there we can work out the divider we need to slow the system clock down to the right speed for the state machine. The final two lines just initialise and enable the state machine.
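As a back-of-the-envelope check of that calculation, assuming the Pico’s default 125 MHz system clock and the 800 kHz WS2812B bit rate (a quick sketch in Python rather than C):

T1, T2, T3 = 2, 5, 3                        # timings from the PIO program above
cycles_per_bit = T1 + T2 + T3               # 10 state-machine cycles per bit
freq = 800_000                              # WS2812B data rate in Hz
div = 125_000_000 / (freq * cycles_per_bit)
print(div)                                  # 15.625, the divider handed to the state machine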

The main processor

That’s the code that’s running on the state machine, so let’s now look at the code that’s running on our main processor cores. The full code is on github. Let’s first look at the code running on the second core (we’ll look at how to start this code running shortly), as this controls the light levels of the LEDs.

static inline void put_pixel(uint32_t pixel_grb) {
    pio_sm_put_blocking(pio0, 0, pixel_grb << 8u);
}

static inline uint32_t urgb_u32(uint8_t r, uint8_t g, uint8_t b) {
    return ((uint32_t) (r) << 8) |
           ((uint32_t) (g) << 16) |
           (uint32_t) (b);
}

void ws2812b_core() {
    int valuer, valueg, valueb;
    int shift = bit_depth - 8;

    while (1) {
        for (int i = 0; i < STRING_LEN; i++) {
            valueb = (pixelsb[i] + errorsb[i]) >> shift;
            valuer = (pixelsr[i] + errorsr[i]) >> shift;
            valueg = (pixelsg[i] + errorsg[i]) >> shift;
            put_pixel(urgb_u32(valuer, valueg, valueb));
            errorsb[i] = (pixelsb[i] + errorsb[i]) - (valueb << shift);
            errorsr[i] = (pixelsr[i] + errorsr[i]) - (valuer << shift);
            errorsg[i] = (pixelsg[i] + errorsg[i]) - (valueg << shift);
        }
        sleep_us(400);
    }
}

We start by defining a virtual bit depth. This is how many bits per pixel you can use. Our code will then attempt to create the necessary additional brightness levels. It will run as fast as it can drive the LED strip, but if you try to do too many brightness levels, you’ll start to notice flickering.

We found twelve to be about the best with strings up to around 100 LEDs, but you can experiment with others. Our code works with two arrays – pixels, which holds the values that we want to display, and errors, which holds the error in what we’ve displayed so far (there are three of each for the different colour channels).

If you just want to see this in action, you can download the UF2 file from hsmag.cc/orfgBD and flash it straight to your Pico

To explain that latter point, let’s take a look at the algorithm for determining how to light the LED. We borrowed this from the source code of Fadecandy by Micah Scott, but it’s a well-used algorithm for calculating error rates. We have an outer while loop that just keeps pushing out data to the LEDs as fast as possible. We don’t care about precise timings and just want as much speed as possible. We then go through each pixel.

The corresponding item in the errors array holds the cumulative amount our LED has been underlit so far compared to what we want it to be. Initially, this will be zero, but with each loop (if there’s a difference between what we want to light the LED and what we can light the LED) this error value will increase. These two numbers (the closest light level and the error) added together give the brightness at the pseudo-level, so we need to bit-shift this by the difference between our virtual level and the 8-bit brightness levels that are available.

This gives us the value for this pixel which we write out. We then need to calculate the new error level. Let’s take a look at what this means in practice. Suppose we want a brightness level halfway between 1 and 2 in the 8-bit levels. To simplify things, we’ll use nine virtual bits. 1 and 2 in 8-bit is 2 and 4 in 9 bits (adding an extra 0 to the end multiplies everything by a power of 2), so halfway between these two is a 9-bit value of 3 (or 11 in binary, which we’ll use from now on).

In the first iteration of our loop, pixels is 11, errors is 0, and shift is 1.

value = 11 >> 1 = 1
errors = 11 – 10 = 1

So this time, the brightness level of 1 is written out. On the second iteration, we have:

value = 100 >> 1 = 10
errors = 100 – 100 = 0

So this time, the brightness level of 10 (in binary, or 2 in base 10) is written out. This time, the errors go back to 0, so we’re in the same position as at the start of the first loop. In this case, the LED will flick between the two brightness levels each loop, so you’ll have a brightness halfway between the two.

Using this simple algorithm, we can experiment with different virtual bit-depths. The algorithm will always handle the calculations for us, but we just have to see what creates the most pleasing visual effect for the eye. The larger the virtual bit depth, the more potential iterations you have to go through before the error accumulates enough to create a correction, so the more likely you are to see flicker. The biggest blocker to increasing the virtual bit depth is the sleep_us(400). This is needed to reset the LED strip.

NeoPixels come in many different shapes and sizes

Essentially, we throw out bits at 800kHz, and each block of 24 bits is sent, in turn, to the next LED. However, once there’s a long enough pause, everything resets and it goes back to the first LED. How big that pause is can vary. The truth is that a huge proportion of WS2812B LEDs are clones rather than official parts – and even for official parts, the length of the pause needed to reset has changed over the years.

400 microseconds is conservative and should work, but you may be able to get away with less (possibly even as low as 50 microseconds for some LEDs). The urgb_u32 method simply amalgamates the red, blue, and green values into a single 32-bit string (well, a 24-bit string that’s held inside a 32-bit string), and put_pixel sends this to the state machine. The bit shift there is to make sure the data is in the right place so the state machine reads the correct 24 bits from the output shift register.
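A quick illustration of that packing and shifting, sketched in Python purely to show where the bits end up:

r, g, b = 0x12, 0x34, 0x56
grb = (g << 16) | (r << 8) | b   # 0x341256: green byte first, as WS2812Bs expect
word = grb << 8                  # left-justify in 32 bits so the left-shifting
print(hex(word))                 # OSR clocks out the 24 colour bits first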

Getting it running

We’ve now dealt with all the mechanics of the code. The only bit left is to stitch it all together.

int main() {
    PIO pio = pio0;
    int sm = 0;
    uint offset = pio_add_program(pio, &ws2812_program);
    ws2812_program_init(pio, sm, offset, PIN_TX, 1000000, false);
    multicore_launch_core1(ws2812b_core);

    while (1) {
        for (int i = 0; i < 30; ++i) {
            pixels[i] = i;

            for (int j = 0; j < 30; ++j) {
                pixels[0] = j;
                if (j % 8 == 0) { pixels[1] = j; }
                sleep_ms(50);
            }
            for (int j = 30; j > 0; --j) {
                pixels[0] = j;
                if (j % 8 == 0) { pixels[1] = j; }
                sleep_ms(50);
            }
        }
    }
}

The method ws2812_program_init calls the method created in the PIO program to set everything up. To launch the algorithm creating the virtual bit-depth, we just have to use multicore_launch_core1 to set a function running on the other core. Once that’s done, whatever we put in the pixels array will be reflected as accurately as possible in the WS2812B LEDs. In this case, we simply fade it in and out, but you could do any animation you like.

Get a free Raspberry Pi Pico

Would you like a free Raspberry Pi Pico? Subscribe to HackSpace magazine via your preferred option here, and you’ll receive your new microcontroller in the mail before the next issue arrives.

The post NeoPixel dithering with Pico appeared first on Raspberry Pi.

jprodgers
194 days ago
What a fantastic example of the power of the PIO, and it's also a great initial programming example, as neopixels are very popular. They are knocking this out of the park, hopefully production can meet the demand.
Somerville, MA

Raspberry Pi Enters Microcontroller Game with $4 Pico

1 Comment

Raspberry Pi was synonymous with single-board Linux computers. No longer. The $4 Raspberry Pi Pico board is their attempt to break into the crowded microcontroller module market.

The microcontroller in question, the RP2040, is also Raspberry Pi’s first foray into custom silicon, and it’s got a dual-core Cortex M0+ with luxurious amounts of SRAM and some very interesting custom I/O peripheral hardware that will likely mean that you never have to bit-bang again. But a bare microcontroller is no fun without a dev board, and the Raspberry Pi Pico adds 2 MB of flash, USB connectivity, and nice power management.

As with the Raspberry Pi Linux machines, the emphasis is on getting you up and running quickly, and there is copious documentation: from “Getting Started” type guides for both the C/C++ and MicroPython SDKs with code examples, to serious datasheets for the Pico and the RP2040 itself, to hardware design notes and KiCAD breakout boards, and even the contents of the on-board Boot ROM. The Pico seems designed to make a friendly introduction to microcontrollers using MicroPython, but there’s enough guidance available for you to go as deep down the rabbit hole as you’d like.

Our quick take: the RP2040 is a very well thought-out microcontroller, with myriad nice design touches throughout, enough power to get most jobs done, and an innovative and very hacker-friendly software-defined hardware I/O peripheral. It’s backed by good documentation and many working examples, and at the end of the day it runs a pair of familiar ARM M0+ CPU cores. If this hits the shelves at the proposed $4 price, we can see it becoming the go-to board for many projects that don’t require wireless connectivity.

But you want more detail, right? Read on.

The Specs and the Luxuries

In many ways, the Pico is a well-appointed “normal” microcontroller board. It has 26 3.3 V GPIOs, a standard ARM Serial Wire Debug (SWD) port, USB host or device capabilities, two UARTs, two I2Cs, two SPIs, and 16 PWM channels in eight groups. (The PWM unit can also measure incoming PWM signals, both their frequency and duty cycle.) The Pico has a 12-bit ADC, although it’s connected to only three pins, so you’ve got to be a little careful there. (Ed note: the RP2040 has four ADCs, but the fourth isn’t available on the Pico.)

The twin ARM M0+ cores run off of PLLs, and are specced up to 133 MHz, which is pretty fast. There are separate clock dividers for nearly every peripheral, and they can be turned on and off individually for power savings, as with most other ARM microcontrollers. It runs full-out at around 100 mA @ 5 V, and has full-memory-retention sleep modes under 1 mA.

As the ESP8266 and ESP32 modules do, it uses external flash ROM to store programs, and can run code directly from the flash as if it were internal memory. The Pico board comes with a decent 2 MB QSPI flash chip, but if you’re handy with a soldering iron, you can fit up to 16 MB. It has 264 kB of SRAM, which is certainly comfy. The RAM is divided up internally into four striped 64 kB banks for fast parallel access, but they’re also accessible singly if you’d like. Two additional 4 kB banks are non-striped and suggest using themselves as per-core stack memory, but nothing forces you to use them that way either.

There are numerous minor hardware-level conveniences. All of the configuration registers are 32 bits wide, and you often don’t want to rewrite a whole register just to flip one bit, or you may want to avoid the read-modify-write dance. Like many of the STM32 chips, the RP2040 has a special memory map that lets you set, clear, or XOR any bit in any of the config registers in a single atomic command. There are also 30 GPIOs, so they all fit inside a single 32-bit register — none of this Port B, Pin 7 stuff. This also means that you can read or write them all at once, while setting individual pins is easy through the above atomic access.
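For a feel of what that atomic alias trick looks like from code, here’s a small MicroPython sketch using machine.mem32. The offsets are the ones documented in the RP2040 datasheet’s atomic register access section, and the register address is just an illustrative example (a GPIO control register in IO_BANK0); don’t poke registers blindly on hardware you care about:

from machine import mem32

ATOMIC_XOR = 0x1000   # write a mask here to XOR those bits
ATOMIC_SET = 0x2000   # write a mask here to set those bits
ATOMIC_CLR = 0x3000   # write a mask here to clear those bits

REG = 0x40014004      # example: GPIO0_CTRL in IO_BANK0 (check the datasheet)

mem32[REG + ATOMIC_SET] = 1 << 8     # set bit 8 in one bus write, no read-modify-write
mem32[REG + ATOMIC_CLR] = 1 << 8     # and clear it again, just as atomically
print(hex(mem32[REG] & 0xffffffff))  # normal read of the underlying register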

An internal mask ROM contains the UF2 bootloader, which means that you can always get the chip back under control. When you plug the Pico in holding down the BOOTSEL button, it shows up as a USB mass storage device, and you can just copy your code across, with no programmer, and Raspberry even provides an all-zeros file that you can copy across to completely clean-slate the machine. If you copy the Pico’s MicroPython binary across, however, you’ll never need the bootloader again. The mask ROM also contains some fast routines that support integer and floating point math, and all of the contents are open source as mentioned above.

The power regulation onboard is a boost-buck configuration that takes an input from 1.8 V to 5 V. This is a good range for lithium batteries, for instance, which can be a hassle because they run both above and below the IC’s 3.3 V, so it’s nice to have a boost-buck regulator to squeeze out the last few milliamp-hours. Or you could run your project on two AAs. That’s nice.

So the Pico/RP2040 is a competent modern dev board with some thoughtful touches. But it gets better.

The PIO: Never Bitbang Again

The real standout peripheral on the RP2040 and the Pico is the Programmable I/O (PIO) hardware, which allows you to define your own digital communication peripheral. There are two of these PIO units, and each one has four programmable state machines that run sequential programs written in a special PIO assembly language. Each of the state machines has its own clock divider, register memory, IRQ flags, DMA interface, and GPIO mapping. These allow essentially arbitrary cycle-accurate I/O, doing the heavy lifting so that the CPU doesn’t have to.

If you want to program another UART, for instance, it’s trivial. But so is Manchester-encoded UART, or a grey code encoder/decoder, or even fancier tricks. One of the example applications is a DPI video example, with one state machine handling the scanline timing and pixel clock, while another pushes out the pixel data and run-length encodes it. These are the sort of simple-but-fast duties that can bog down a CPU, leading to timing glitches, so dedicated hardware is the right solution.

The PIOs are meant to have a lot of the flexibility of a CPLD or FPGA, but be easier to program. Each state machine can only take a “program” that is 32 instructions long, but the “pioasm” language is very dense. For instance, the command to set pin states also has an argument that says how long to wait after the pins are set, and additional “side-set” pins can be twiddled in the same instruction. So with one instruction you can raise a clock line, set up your data, and hold this state for a defined time. A basic SPI master TX implementation is two lines.
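To give a flavour of that density, here’s what a write-only SPI transmitter can look like, expressed with MicroPython’s PIO assembler for brevity (a sketch of the idea with hypothetical pin choices, not the official example): one instruction shifts a data bit onto a pin with the clock held low via side-set, the next raises the clock, and the delays set the bit period.

import rp2
from machine import Pin

@rp2.asm_pio(out_init=rp2.PIO.OUT_LOW, sideset_init=rp2.PIO.OUT_LOW,
             out_shiftdir=rp2.PIO.SHIFT_LEFT, autopull=True, pull_thresh=8)
def spi_tx():
    out(pins, 1).side(0)[1]   # present the next data bit with the clock low
    nop()       .side(1)[1]   # raise the clock; the receiver samples this edge

# Data on GP3, clock on GP2; 4 MHz state machine clock gives a ~1 Mbit/s bit rate.
sm = rp2.StateMachine(0, spi_tx, freq=4_000_000, out_base=Pin(3), sideset_base=Pin(2))
sm.active(1)
sm.put(0xA5 << 24)   # left-justify the byte: the OSR shifts the MSB out first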

Or take the example of the WS2812 LED protocol. To send a logical 1, you hold the self-clocked data line high for a long period and low for a short period. To send a logical 0, the data line is held high for a short period and low for a long one. Creating the routines to do this with reasonable speed in the CPU, without glitches, required a non-trivial shedding of hacker tears. With the PIO peripheral, writing a routine to shift out these bits with absolute cycle accuracy is simple, and once that’s done your code can simply write RGB values to the PIO and the rest is taken care of.

To run PIO code from C, the assembler is called at compile time, the program is turned into machine language and stored as a matrix in a header file, and then this can be written to the PIO device from within main() to initialize it. In Python, it’s even easier — the @asm_pio decorator turns a function into PIO code. You just have to write the “Python” function using the nine PIO assembly instructions and then hook it up to GPIO pins. After that, you can call it from your code as if it were a normal peripheral.
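Here’s roughly what that looks like in practice, borrowing the shape of the standard MicroPython blink example (pin number and frequency are just illustrative):

import rp2
from machine import Pin

@rp2.asm_pio(set_init=rp2.PIO.OUT_LOW)
def flicker():
    # Four instructions, each padded with the maximum 31-cycle delay,
    # toggle the "set" pin entirely inside the PIO state machine.
    set(pins, 1) [31]
    nop()        [31]
    set(pins, 0) [31]
    nop()        [31]

# Run it on state machine 0 at 2 kHz, driving the Pico's onboard LED (GP25).
# 128 cycles per loop at 2 kHz is roughly a 16 Hz flicker; adjust to taste.
sm = rp2.StateMachine(0, flicker, freq=2000, set_base=Pin(25))
sm.active(1)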

Having played around with it only a little bit, the PIO is the coolest feature of the Pico/RP2040. It’s just a little bit of cycle-correct programmable logic, but most of the time, that’s all you need. And if you don’t feel like learning a new assembly language — although it’s only nine instructions — there are a heaping handful of examples already, and surely folks will develop more once the boards hit the streets.

IDEs and SDKs: C and MicroPython

The Raspberry Pi single-board computers (SBCs), when combined with their documentation and examples, usually manage a nice blend of being simple enough for the newbie while at the same time not hiding too much. The result is that, rather than having the underlying system’s Linuxiness abstracted away, you get introduced to it in a friendly way. That seems to be what the Raspberries are aiming at with the Pico — an introduction to microcontrollers that’s made friendly through documentation and MicroPython’s ease of use, but that’s also not pulling any punches when you turn to look at the C/C++ code.

USB for power, UART for communication, and SWD for programming and debugging.

And having a Raspberry Pi SBC on hand makes a lot of the most hardcore microcontrollering simpler. For instance, if you want to do debugging on-chip, you’ll need to connect over the SWD interface, and for that you usually need a programmer. Of course, you can also bit-bang a SWD controller with the GPIOs of a Raspberry Pi SBC, but you’ll have to configure OpenOCD just right to do so.

If that all sounded like gibberish, don’t worry — all of this is taken care of by a simple pico_setup.sh script. It not only installs all of the compilation and debugging environment, it also (optionally) pulls down VScode for you. Nice.

And you will want to program it over the SWD eventually. The cycle of unplugging USB, holding down a button, and re-plugging USB gets old real fast.

If you’re a command-line junkie, the C SDK’s build system is based on CMake and runs just fine from the command line if you’ve already got the ARM toolchain installed. And as with all SDKs, there’s a certain amount of boilerplate necessary to start up a new project. This is taken care of by the pico project generator, so you don’t have to write it yourself.

In the “Getting started” guides, you’ll find instructions for setting up your environment on a Raspberry Pi SBC, Windows, Mac, or desktop Linux machine. If you prefer Eclipse as an IDE, there are integration instructions as well.

Two Cores: Here be Dragons

If there’s one area that strikes me as not yet fully developed, it’s the dual-core aspect of the system. Right now, if you write either C or Python code, it’s running on Core 0, while Core 1 is simply sitting idle. Both the C and Python SDK documentation tell you how to start up a thread on the other core, and there’s example code available as well, but the instructions are sparse. In C, there’s a pico/multicore.h and even mutex, semaphore, and queue libs for you to include, but the documentation warns that most of the stdlib functions aren’t thread-safe. In Python you import _thread and call the start_new_thread() method, but I don’t know how much fine-grained control you have.
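To make that concrete, here’s a minimal MicroPython sketch of the pattern (illustrative only; the names and timings are made up, and the same thread-safety caveats apply):

import _thread
import time

counter = 0
lock = _thread.allocate_lock()

def core1_task():
    # Runs on the RP2040's second core. Both cores share RAM, so anything
    # they both touch gets wrapped in the lock.
    global counter
    while True:
        lock.acquire()
        counter += 1
        lock.release()
        time.sleep_ms(100)

_thread.start_new_thread(core1_task, ())   # start the worker on core 1

while True:
    lock.acquire()
    print("core 1 has counted to", counter)
    lock.release()
    time.sleep(1)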

If all of the above sounds scary, well, it is a little. The truth about coding for multiprocessor systems is that it opens up new ways for things to go wrong, as one CPU changes values out from under the other, or they both try to write out to the UART at the same time. We wrote the Raspberries and asked if they were planning to port over an RTOS, which provides a little more structure to the problem, and they replied that that was actually first on their plate after they get through the release. So unless you know what you’re doing, you might not get the full benefit of the dual-core chip just yet. But we’re honestly looking forward to an RTOS getting the Raspberry Pi documentation-and-tutorial treatment when it happens.

Deep Thoughts

It’s not every day that you see a new player enter the microcontroller market, let alone one with the hacker-friendly qualifications of Raspberry Pi. For that alone, this board is notable. But the feature set is also solid, there are many creature comforts in both the silicon and the support, and it brings one truly new capability to the table in the form of the PIO units. Add to all this a price tag of $4, and you can imagine it becoming folks’ go-to board — for those times when you don’t need wireless connectivity.

Indeed, the only real competitors for this board in terms of price/performance ratio are the various ESP32 boards. But they’re also very different animals — one offers fewer GPIOs but has extensive wireless features, and the other has more (and more flexible) GPIO, device and host USB, but no radio. Power consumption while running full-out, with wireless turned off, is a slight advantage for the ESP32, but the sleep modes of the Pico are slightly thriftier. Both SDKs get the job done in C, and both run MicroPython. ESP32’s dual cores run FreeRTOS, but we imagine it won’t be very long before that playing field is levelled. So basically it’s down to WiFi vs USB.

Of course, for slightly less money, one can pick up one of the STM32-based “Black Pill” boards, with yet another set of pros and cons. Choices, choices!

With the Pico, Raspberry Pi is entering a crowded field. They’ve got the name recognition, a cool hardware trick, a great value proposition, and a track record of solid documentation. If I were coding up a GPIO-heavy application without the need for wireless, the Pico would be a solid choice, especially if I could make use of the extra core.

I’ll leave you with a teaser: On page 9 of the RP2040 datasheet, they lay out what “2040” stands for: two cores, type M0+, more than 256 kB RAM, and zero kB flash. Does that mean we’ll eventually see models with more RAM, onboard flash, or different ARM cores? RP2050? RP2048? Speculate wildly in the comments.

jprodgers
194 days ago
This is going to be huge. I can't wait to see what the Pi foundation does in the microcontroller space. The PIOs are a game changer.
Somerville, MA