Hacker, artist, and maker who works for the Museum of Science in Boston.

Dangerously wrong oxygen readings in dark-skinned patients spur FDA scrutiny

A nurse uses a pulse oximeter on a patient in Plainfield, New Jersey, on October 26, 2016. (credit: Getty | Bloomberg)

For years, studies have found racial bias in common oxygen-measuring devices called pulse oximeters, as well as alarming dangers from inaccurate blood oxygen measurements in dark-skinned patients. Now, the US Food and Drug Administration is summoning its expert advisers to review the problematic devices and consider new recommendations and regulatory actions.

The FDA announced Thursday that its advisory committee—the Anesthesiology and Respiratory Therapy Devices Panel (ARTDP)—would convene on November 1 to discuss pulse oximeters. Until then, the agency renewed emphasis on the safety warning it issued in February 2021, which noted that the ubiquitous devices "may be less accurate in people with dark skin pigmentation."

That warning closely followed a study from December 2020 that highlighted the racial bias of pulse oximeters amid the COVID-19 pandemic. The global spread of a respiratory disease with a hallmark symptom of breathing difficulty sent pulse oximeter usage soaring—elevating the problem of racial disparities. The 2020 study—led by researchers in Michigan and published in the New England Journal of Medicine—found that pulse oximeters were nearly three times more likely to miss dangerously low blood oxygen levels (hypoxemia) in Black patients compared with white patients.

From there, several other studies corroborated the racial bias and highlighted the danger it posed to dark-skinned patients during the pandemic and beyond. But the 2020 study certainly wasn't the first to report the concerning bias. Researchers have long noted the racial disparity, with studies dating back as early as 1991.

Dubious devices

Pulse oximeters were developed in the 1970s and have since become a mainstay in routine patient care, with current devices typically clipping onto a finger. They estimate blood oxygen saturation (SpO2) by assessing the relative absorbance of two wavelengths of light (red and infrared, generally) beamed into the finger, plus the pulse-based flow of blood through the arteries.
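The arithmetic behind that estimate is a "ratio of ratios": the pulsatile (AC) component of the absorbed light is normalized by the steady (DC) component at each wavelength, and the two wavelengths are then compared. Here is a minimal illustrative sketch in Python; the linear calibration at the end is a textbook approximation, not any vendor's actual curve, which is derived empirically from human calibration trials:

def estimate_spo2(ac_red, dc_red, ac_ir, dc_ir):
    # Normalize the pulsatile signal by the baseline at each wavelength,
    # then take the red-to-infrared ratio of those ratios.
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    # Textbook linear approximation; real devices map r to SpO2 using
    # empirical lookup tables built from calibration studies.
    return 110 - 25 * r

That final calibration step is where the trouble described next can enter: if the empirical curve was built mostly from light-skinned subjects, the mapping from ratio to SpO2 can be systematically off for everyone else.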

But the devices were mainly tested and calibrated on light-skinned patients. Researchers suspect that high levels of the skin pigment melanin in dark-skinned patients can interfere with the absorbance measurements. Numerous studies have found that pulse oximeters tend to overestimate oxygen saturation in dark-skinned patients.

The dangers of those faulty readings were realized during the pandemic. A study published in May found that pulse oximeters' overestimation of SpO2 in Black and Hispanic patients with COVID-19 caused significant delays in care, including access to lifesaving treatments, such as dexamethasone. For some patients, the faulty readings meant their eligibility for treatment was never recognized. That study, led by researchers at Johns Hopkins University, appeared in JAMA Internal Medicine.

In July, another study in JAMA Internal Medicine by researchers in Boston found that darker-skinned patients in intensive care who had inaccurate pulse oximeter readings ended up receiving less supplemental oxygen. Meanwhile, a study published in the same month in BMJ by researchers in Michigan looked at records of more than 30,000 patients at the Veterans Health Administration between 2013 and 2019. It found that Black patients were more likely to have hypoxemia undetected by pulse oximetry. The study notes that hidden hypoxemia is linked to an increased risk of morbidity and mortality.


jprodgers
8 days ago
When I worked for the Museum of Science in Boston, one of the exhibits had a pulse oximeter in it, but it "took too long" to find a reading. So I went about trying to build one that would work more quickly at the expense of accuracy. I tried both off-the-shelf units and sensors bought directly, and none of them worked very well with darker skin tones. They would take ages to give a reading, and when they did, it didn't seem very accurate. Ultimately we dropped the pulse oximeter from the exhibit, because getting anyone to stand still for the time it took for a reading wasn't feasible.
Somerville, MA

The mystery of why some people don’t catch COVID

(credit: d3sign via Getty Images)

We all know a “COVID virgin,” or “Novid,” someone who has defied all logic in dodging the coronavirus. But beyond judicious caution, sheer luck, or a lack of friends, could the secret to these people’s immunity be found nestled in their genes? And could it hold the key to fighting the virus?

In the early days of the pandemic, a small, tight-knit community of scientists from around the world set up an international consortium, called the COVID Human Genetic Effort, whose goal was to search for a genetic explanation as to why some people were becoming severely sick with COVID while others got off with a mild case of the sniffles.

After a while, the group noticed that some people weren’t getting infected at all—despite repeated and intense exposures. The most intriguing cases were the partners of people who became really ill and ended up in intensive care. “We learned about a few spouses of those people that—despite taking care of their husband or wife, without having access to face masks—apparently did not contract infection,” says András Spaan, a clinical microbiologist at Rockefeller University in New York.

Spaan was tasked with setting up an arm of the project to investigate these seemingly immune individuals. But they had to find a good number of them first. So the team put out a paper in Nature Immunology in which they outlined their endeavor, with a discreet final line mentioning that “subjects from all over the world are welcome.”

The response, Spaan says, was overwhelming. “We literally received thousands of emails,” he says. The sheer volume rushing to sign up forced them to set up a multilingual online screening survey. So far, they’ve had about 15,000 applications from all over the world.

The theory that these people might have preexisting immunity is supported by historical examples. There are genetic mutations that confer natural immunity to HIV, norovirus, and a parasite that causes recurring malaria. Why, the team reasoned, would COVID be any different? Yet, in the long history of immunology, the concept of inborn resistance against infection is a fairly new and esoteric one. Only a few scientists even take an interest. “It’s such a niche field, that even within the medical and research fields, it’s a bit pooh-poohed on,” says Donald Vinh, an associate professor in the Department of Medicine at McGill University in Canada. Geneticists don’t recognize it as proper genetics, nor immunologists as proper immunology, he says. This is despite there being a clear therapeutic goal. “If you can figure out why somebody cannot get infected, well, then you can figure out how to prevent people from getting infected,” says Vinh.

But finding immune people is an increasingly tricky task. While many have volunteered, only a small minority fit the narrow criteria of probably having encountered the virus yet having no antibodies against it (which would indicate an infection). The most promising candidates are those who have defied all logic in not catching COVID despite being at high risk: health care workers constantly exposed to COVID-positive patients, or those who lived with—or even better, shared a bed with—people confirmed to be infected.

By the time the team started looking for suitable people, they were working against mass vaccination programs, too. “On the one hand, a lot of people were getting vaccinated, which is great, don’t get me wrong,” says Vinh. “But those are not the people we want.” On the other hand, seeking out the unvaccinated “does invite a bit of a fringe population.” Of the thousands that flooded in after the call, about 800 to 1,000 recruits fit that tight bill.

Then the highly infectious omicron variant arrived. “Omicron has really ruined this project, I have to be honest with you,” says Vinh. It dramatically reduced their pool of candidates. But Spaan views omicron’s devastation in a more positive light: that some recruits came through the omicron waves uninfected really lends support to the existence of innate resistance.

Across the Atlantic, in Dublin, Ireland, another member of the group—Cliona O’Farrelly, ​​a professor of comparative immunology at Trinity College Dublin—set about recruiting health care workers at a hospital in Dublin. Of the cohort she managed to assemble, omicron did throw a wrench in the works—half of the people whose DNA they had sent off to be sequenced ended up getting infected with the variant, disproving their presumed resistance. To spread awareness of their research and find more suitable people, O’Farrelly went on the radio and expanded the call to the rest of the country. Again, enthusiasm abounded: More than 16,000 people came forward who claimed to have defied infection. “We’re now trying to deal with all of that,” she says. “I’m hoping that we’ll have one or two hundred from those, which will be unbelievably valuable.”

Now that they have a substantial cohort, the group will take a twofold approach to hunting for a genetic explanation for resistance. First, they’ll blindly run every person’s genome through a computer to see if any gene variation starts to come up frequently. At the same time, they’ll look specifically at an existing list of genes they suspect might be the culprits—genes where an unusual variant would plausibly confer resistance. An example is the gene that codes for the ACE2 receptor, a protein on the surface of cells that the virus uses to slip inside.
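As a toy illustration of that candidate-gene pass, one could test whether carriers of a variant in a gene like ACE2 are overrepresented among the resistant cohort relative to infected controls. All of the counts below are made up, and the consortium's real pipeline is far more sophisticated than a single contingency test:

from scipy.stats import fisher_exact

# Hypothetical counts: [variant carriers, non-carriers]
resistant_cohort = [12, 188]    # people who resisted infection
infected_controls = [9, 991]    # people who caught COVID

# Fisher's exact test on the 2x2 table of carriers vs. non-carriers.
odds_ratio, p_value = fisher_exact([resistant_cohort, infected_controls])
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2g}")

A variant that kept turning up like this across ancestries would then graduate to the cell-model testing described below.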

The consortium has about 50 sequencing hubs around the world, from Poland to Brazil to Italy, where the data will be crunched. While enrollment is still ongoing, at a certain point, they will have to decide they have enough data to move deeper into their research. “That’s going to be the moment we have people with clear-cut mutations in the genes that make sense biologically,” says Spaan.

Once they come up with a list of gene candidates, it’ll then be a case of narrowing and narrowing that list down. They’ll go through the list one by one, testing each gene’s impact on defenses against COVID in cell models. That process will take between four and six months, Vinh estimates.

Another complication could arise from the global nature of the project; the cohort will be massively heterogeneous. People in Slavic countries won’t necessarily have the same genetic variation that confers resistance as people of Southeast Asian ethnicity. Again, Spaan views this diversity as a plus: “This means that we can correct for ethnic origin in our analysis,” he says. But it also means, Vinh says, that they’re not just looking for one needle in one haystack—”you’re looking for the golden needle and the silver needle and the bronze needle, and you’re looking in the factory of haystacks.”

It’s unlikely to be one gene that confers immunity, but rather an array of genetic variations coming together. “I don’t think it’ll come down to a one-liner on the Excel sheet that says, ‘This is the gene,’” says Vinh. “If it happens to be a single gene, we will be floored.”

After all this work is done, natural genetic resistance will likely turn out to be extremely rare. Still, should they find protective genes, it could help to inform future treatments. There’s good reason to think this: In the 1990s, a group of sex workers in Nairobi, Kenya, defied all logic in failing to become infected with HIV during three years of follow-up testing. It was discovered that some were carrying a genetic mutation that produces a messed-up version of the protein called the CCR5 receptor, one of the proteins that HIV uses to gain entry to a cell and make copies of itself. Having the mutation means HIV can’t latch onto cells, giving natural resistance. This then inspired maraviroc, an antiretroviral used to treat infection, as well as the most promising “cure” for HIV, where two patients received stem cell transplants from a donor carrying the mutation and became HIV-free.

It’s also possible that genetics doesn’t tell the full story of those who resist infection against all odds. For some, the reason for their protection might rest instead in their immune system. During the first wave of the pandemic, Mala Maini, a professor of viral immunology at University College London, and her colleagues intensively monitored a group of health care workers who in theory should have been infected with COVID but for some reason hadn’t been. The team also looked at blood samples from a separate cohort of people, taken well before the pandemic. On closer inspection of the two groups’ samples, Maini’s team found a secret weapon lying in their blood: memory T cells—immune cells that form the second line of defense against a foreign invader. These cells, lying dormant from previous dalliances with other coronaviruses, such as the ones that cause the common cold, could be providing cross-protection against SARS-CoV-2, her team hypothesized in their paper in Nature in November 2021.

Other studies have supported the theory that these cross-reactive T cells exist and may explain why some people avoid infection. Maini compares the way these memory T cells might quickly attack SARS-CoV-2 to driving a car. If the car is unlike one you’ve ever driven before—a manual for a life-long automatic driver—it would take you a while to get to grips with the controls. But assume the pre-existing T cells are accustomed to automatics, and a SARS-CoV-2 encounter is like hopping into the driver’s seat of one, and you can see how they would launch a much quicker and stronger immune attack.

A previous seasonal coronavirus infection or an abortive COVID infection in the first wave—meaning an infection that failed to take hold—could create T cells that offer this preexisting immunity. But Maini points out a crucial caveat: This does not mean that you can skip the vaccine on the potential basis that you’re carrying these T cells.

More recently, Maini and her colleague Leo Swadling published another paper that looked at cells from the airways of volunteers, which were sampled and frozen before the pandemic. They figured, if the infection is getting shut down so quickly, then surely the cells responsible must be ready and waiting at the first sign of infection. The cohort in the study was small—just 10 people—but six out of the 10 had cross-reactive T cells sitting in their airways.

Off the back of her research, Maini is working on a vaccine with researchers at the University of Oxford that induces these T cells specifically in the mucus membranes of the airway, and which could offer broad protection against not only SARS-CoV-2 but a variety of coronaviruses. Such a vaccine could stop the COVID virus wriggling out of the existing vaccines’ reach, because while the spike protein—the focus of current vaccines—is liable to mutate and change, T cells target bits of viruses that are highly similar across all human and animal coronaviruses.

And a mucosal vaccine could prepare these T cells in the nose and throat, the ground zero of infection, giving COVID the worst shot possible at taking root. “We’re quite optimistic that that sort of approach could provide better protection against new emerging variants, and ideally also against a new transfer of a new animal zoonotic virus,” says Maini.

As for Spaan and his team, they also have to entertain the possibility that, after the slog, genetic resistance against SARS-CoV-2 turns out to be a pipedream. “That’s our fear—that we will do all this and we will find nothing,” says Vinh. “And that’s OK. Because that’s science, right?” O’Farrelly, on the other hand, has undeterred optimism they’ll find something. “You just can’t have people die and not have the equivalent at the other end of the spectrum.”

This story originally appeared on wired.com.


jprodgers
14 days ago
I know quite a few households, mine included, that haven't gotten it yet. We're all vaccinated, still wearing masks out in public, and generally avoiding large indoor gatherings.
Somerville, MA

Bitcoin power plant is turning a 12,000-year-old glacial lake into a hot tub

In this aerial photo of Greenidge Generation's power plant outside Dresden, NY, Seneca Lake is visible in the background. The lake receives warm water from Greenidge's operations. (credit: Greenidge Generation LLC)

The fossil fuel power plant that a private equity firm revived to mine bitcoin is at it again. Not content to just pollute the atmosphere in pursuit of a volatile crypto asset with little real-world utility, this experiment in free marketeering is also dumping tens of millions of gallons of hot water into glacial Seneca Lake in upstate New York.

“The lake is so warm you feel like you’re in a hot tub,” Abi Buddington, who lives near the Greenidge power plant, told NBC News.

In the past, nearby residents weren’t necessarily enamored with the idea of a pollution-spewing power plant warming their deep, cold water lake, but at least the electricity produced by the plant was powering their homes. Today, they’re lucky if a small fraction does. Most of the time, the turbines are burning natural gas solely to mint profits for the private equity firm Atlas Holdings by mining bitcoin.

Atlas, the firm that bought Greenidge, has been ramping up its bitcoin mining aspirations over the last year and a half, installing thousands of mining rigs that have produced over 1,100 bitcoin as of February 2021. The company has plans to install thousands more rigs, ultimately using 85 MW of the station’s total 108 MW capacity.

Seneca Lake’s water isn’t the only thing the power plant is warming. In December 2020, with the power plant running at just 13 percent of its capacity, Atlas’ bitcoin operations there produced 243,103 tons of carbon dioxide and equivalent greenhouse gases, a ten-fold increase from January 2020, when mining commenced. NOx pollution, which contributes to asthma, lung cancer, and premature death, also rose tenfold.

The plant currently has a permit to emit 641,000 tons of CO2e every year, though if Atlas wants to maximize its return on investment and use all 108 MW of the plant’s capacity, its carbon pollution could surge to 1.06 million tons per year. Expect NOx emissions—and health impacts—to rise accordingly. The project’s only tangible benefit (apart from dividends appearing in investors’ pockets) is the company’s claimed 31 jobs.

Sparkling specimen

The 12,000-year-old Seneca Lake is a sparkling specimen of the Finger Lakes region. It still boasts high water quality, clean enough to drink with just limited treatment. Its waters are home to a sizable lake trout population that’s large enough to maintain the National Lake Trout Derby for 57 years running. The prized fish spawn in the rivers that feed the lake, and it’s into one of those rivers—the Keuka Lake Outlet, known to locals for its rainbow trout fishing—that Greenidge dumps its heated water. 

Rainbow trout are highly sensitive to fluctuations in water temperature and are happiest in the mid-50s Fahrenheit. Because cold water holds more oxygen, fish become stressed as temperatures rise. Above 70˚ F, rainbow trout stop growing, and stressed individuals start dying. Experienced anglers don’t bother fishing when water temps get to that point.

Greenidge has a permit to dump 135 million gallons of water per day into the Keuka Lake Outlet as hot as 108˚ F in the summer and 86˚ F in the winter. New York’s Department of Environmental Conservation reports that over the last four years, the plant’s daily maximum discharge temperatures have averaged 98˚ in summer and 70˚ in winter. That water eventually makes its way to Seneca Lake, where it can result in tropical surface temps and harmful algal blooms. Residents say lake temperatures are already up, though a full study won't be completed until 2023.

Casting about for profits

Atlas, the private equity firm, bought the Greenidge power plant in 2014 and converted it from coal to natural gas. The firm initially intended it to be a peaker plant that would sell power to the grid when demand spiked. 

But in the three years that Atlas spent renovating the plant, the world changed. Natural gas, which was once viewed as a bridge fuel, is increasingly being seen as a dead end. Renewable sources like wind and solar continue to plunge in price, so much so that by 2019, the economics of power plants like Greenidge meant that 60% of them didn’t run more than six hours in a row. Today, renewable sources backed by batteries are cheaper than gas-powered peaker plants, and even batteries alone are threatening the fossil behemoths.

Though Atlas spent $60 million retrofitting the old coal plant to run on gas, it didn’t spring for the more advanced combined cycle technology, which would have helped it operate profitably as a peaker. In the search for higher returns, the company landed on bitcoin mining, Greenidge’s CEO told NBC. After a small test suggested that mining would be profitable, the firm plowed significant sums into the project. By the end of the year, Greenidge and Atlas plan to have 18,000 rigs mining at the site with another 10,500 on the horizon. When Atlas’ plans for Greenidge are complete, mining rigs will consume 79% of the plant’s rated capacity. 

Atlas won’t stop there, of course. The firm, through Greenidge Generation Holdings, will lease a building from a bankrupt book and magazine printer in Spartanburg, South Carolina, and convert it into a datacenter for cryptocurrency mining. Unlike the original Greenidge, this project doesn’t have onsite power, and Atlas claims it’ll use two-thirds “zero carbon” power from sources like nuclear. The rest? Fossil, most likely, and Atlas says it’ll offset emissions from both its Spartanburg and New York operations. But the company hasn’t said how, and many offset programs don’t reduce emissions as claimed.

As for the hot tub that Seneca Lake's residents say it's turning into? There’s no offset for that.


jprodgers
448 days ago
You don't need a general AI to make digital paperclips when plain old capitalism will do it for you. The experiment has failed, and it needs to be shut down.
Somerville, MA
ChrisDL
447 days ago
New York

How a Docker footgun led to a vandal deleting NewsBlur’s MongoDB database


tl;dr: A vandal deleted NewsBlur’s MongoDB database during a migration. No data was stolen or lost.

I’m in the process of moving everything on NewsBlur over to Docker containers in prep for a big redesign launching next week. It’s been a great year of maintenance and I’ve enjoyed the fruits of Ansible + Docker for NewsBlur’s 5 database servers (PostgreSQL, MongoDB, Redis, Elasticsearch, and soon ML models). The day was wrapping up and I settled into a new book on how to tame the machines once they’re smarter than us when I received a strange NewsBlur error on my phone.

"query killed during yield: renamed collection 'newsblur.feed_icons' to 'newsblur.system.drop.1624498448i220t-1.feed_icons'"

There is honestly no set of words in that error message that I ever want to see again. What is drop doing in that error message? Better go find out.

I logged into the MongoDB machine to check out what state the DB was in, and I came across the following…

nbset:PRIMARY> show dbs
READ__ME_TO_RECOVER_YOUR_DATA   0.000GB
newsblur                        0.718GB

nbset:PRIMARY> use READ__ME_TO_RECOVER_YOUR_DATA
switched to db READ__ME_TO_RECOVER_YOUR_DATA
    
nbset:PRIMARY> db.README.find()
{ 
    "_id" : ObjectId("60d3e112ac48d82047aab95d"), 
    "content" : "All your data is a backed up. You must pay 0.03 BTC to XXXXXXFTHISGUYXXXXXXX 48 hours for recover it. After 48 hours expiration we will leaked and exposed all your data. In case of refusal to pay, we will contact the General Data Protection Regulation, GDPR and notify them that you store user data in an open form and is not safe. Under the rules of the law, you face a heavy fine or arrest and your base dump will be dropped from our server! You can buy bitcoin here, does not take much time to buy https://localbitcoins.com or https://buy.moonpay.io/ After paying write to me in the mail with your DB IP: FTHISGUY@recoverme.one and you will receive a link to download your database dump." 
}

Two thoughts immediately occurred:

  1. Thank goodness I have some recently checked backups on hand
  2. No way they have that data without me noticing

Three and a half hours before this happened, I switched the MongoDB cluster over to the new servers. When I did that, I shut down the original primary in order to delete it in a few days when all was well. And thank goodness I did that as it came in handy a few hours later. Knowing this, I realized that the hacker could not have taken all that data in so little time.

With that in mind, I’d like to answer a few questions about what happened here.

  1. Was any data leaked during the hack? How do you know?
  2. How did NewsBlur’s MongoDB server get hacked?
  3. What will happen to ensure this doesn’t happen again?

Let’s start by talking about the most important question of all: what happened to your data.

1. Was any data leaked during the hack? How do you know?

I can definitively write that no data was leaked during the hack. I know this because of two different sets of logs showing that the automated attacker only issued deletion commands and did not transfer any data off of the MongoDB server.

Below is a snapshot of the bandwidth of the db-mongo1 machine over 24 hours:

You can imagine the stress I experienced in the forty minutes between 9:35p, when the hack began, and 10:15p, when the fresh backup snapshot was identified and put into gear. Let’s break down each moment:

  1. 6:10p: The new db-mongo1 server was put into rotation as the MongoDB primary server. This machine was the first of the new, soon-to-be private cloud.
  2. 9:35p: Three hours later an automated hacking attempt opened a connection to the db-mongo1 server and immediately dropped the database. Downtime ensued.
  3. 10:15p: Before the former primary server could be placed into rotation, a snapshot of the server was made to ensure the backup would not delete itself upon reconnection. This cost a few hours of downtime, but saved nearly 18 hours of a day’s data by not forcing me to go into the daily backup archive.
  4. 3:00a: Snapshot completes, replication from original primary server to new db-mongo1 begins. What you see in the next hour and a half is what the transfer of the DB looks like in terms of bandwidth.
  5. 4:30a: Replication, which is inbound from the old primary server, completes, and now replication begins outbound on the new secondaries. NewsBlur is now back up.

The most important bit of information the above chart shows us is what a full database transfer looks like in terms of bandwidth. From 6p to 9:30p, the amount of data was the expected amount from a working primary server with multiple secondaries syncing to it. At 3a, you’ll see an enormous amount of data transferred.

This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

We can also reason that the vandal was not able to access any files on the server outside of MongoDB, because a recent version of MongoDB was running inside a Docker container. Unless the attacker had 0-days for both MongoDB and Docker, it is highly unlikely they were able to break out of the MongoDB server connection.

While the server was being snapshotted, I used that time to figure out how the hacker got in.

2. How did NewsBlur’s MongoDB server get hacked?

Turns out the ufw firewall I enabled and diligently kept on a strict allowlist with only my internal servers didn’t work on a new server because of Docker. When I containerized MongoDB, Docker helpfully inserted an allow rule into iptables, opening up MongoDB to the world. So while my firewall was “active,” running sudo iptables -L | grep 27017 showed that MongoDB was open to the world. This has been a Docker footgun since 2014.
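For anyone wanting to close the same hole, two standard fixes exist; this is a sketch, not NewsBlur's exact configuration. Either publish the container port bound to loopback, so the iptables rule Docker inserts doesn't expose it publicly, or add an explicit rule to the DOCKER-USER chain, which Docker evaluates before its own rules and leaves alone:

$ # Option 1: bind the published MongoDB port to localhost only
$ docker run -d -p 127.0.0.1:27017:27017 mongo:4.0

$ # Option 2: drop traffic to MongoDB that isn't from the internal
$ # network (the 10.0.0.0/8 subnet here is illustrative)
$ sudo iptables -I DOCKER-USER -p tcp --dport 27017 ! -s 10.0.0.0/8 -j DROP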

To be honest, I’m a bit surprised it took over 3 hours from when I flipped the switch to when a hacker/vandal dropped NewsBlur’s MongoDB collections and pretended to ransom about 250GB of data. This is the work of an automated hack and one that I was prepared for. NewsBlur was back online a few hours later once the backups were restored and the Docker-made hole was patched.

It would make for a much more dramatic read if I had been hit through a vulnerability in Docker instead of a footgun. By silently overriding the firewall, Docker has made it easier for developers who want to open up ports on their containers at the expense of security. Better would be for Docker to issue a warning when it detects that the most popular firewall on Linux is active and filtering traffic to a port that Docker is about to open.

The second reason we know that no data was taken comes from looking through the MongoDB access logs. With these rich and verbose logging sources we can invoke a pretty neat command to find everybody who is not one of the 100 known NewsBlur machines that has accessed MongoDB.


$ cat /var/log/mongodb/mongod.log | egrep -v "159.65.XX.XX|161.89.XX.XX|<< SNIP: A hundred more servers >>"

2021-06-24T01:33:45.531+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26003 #63455699 (1189 connections now open)
2021-06-24T01:33:45.635+0000 I NETWORK  [conn63455699] received client metadata from 171.25.193.78:26003 conn63455699: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.010+0000 I NETWORK  [listener] connection accepted from 171.25.193.78:26557 #63455724 (1189 connections now open)
2021-06-24T01:33:46.092+0000 I NETWORK  [conn63455724] received client metadata from 171.25.193.78:26557 conn63455724: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:33:46.500+0000 I NETWORK  [conn63455724] end connection 171.25.193.78:26557 (1198 connections now open)
2021-06-24T01:33:46.533+0000 I NETWORK  [conn63455699] end connection 171.25.193.78:26003 (1200 connections now open)
2021-06-24T01:34:06.533+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:10056 #63456621 (1266 connections now open)
2021-06-24T01:34:06.627+0000 I NETWORK  [conn63456621] received client metadata from 185.220.101.6:10056 conn63456621: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:06.890+0000 I NETWORK  [listener] connection accepted from 185.220.101.6:21642 #63456637 (1264 connections now open)
2021-06-24T01:34:06.962+0000 I NETWORK  [conn63456637] received client metadata from 185.220.101.6:21642 conn63456637: { driver: { name: "PyMongo", version: "3.11.4" }, os: { type: "Linux", name: "Linux", architecture: "x86_64", version: "5.4.0-74-generic" }, platform: "CPython 3.8.5.final.0" }
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - starting
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping 1 collections
2021-06-24T01:34:08.018+0000 I COMMAND  [conn63456637] dropDatabase config - dropping collection: config.transactions
2021-06-24T01:34:08.020+0000 I STORAGE  [conn63456637] dropCollection: config.transactions (no UUID) - renaming to drop-pending collection: config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 }
2021-06-24T01:34:08.029+0000 I REPL     [replication-14545] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 1), t: -1 })
2021-06-24T01:34:08.030+0000 I STORAGE  [replication-14545] Finishing collection drop for config.system.drop.1624498448i1t-1.transactions (no UUID).
2021-06-24T01:34:08.030+0000 I COMMAND  [conn63456637] dropDatabase config - successfully dropped 1 collections (most recent drop optime: { ts: Timestamp(1624498448, 1), t: -1 }) after 7ms. dropping database
2021-06-24T01:34:08.032+0000 I REPL     [replication-14546] Completing collection drop for config.system.drop.1624498448i1t-1.transactions with drop optime { ts: Timestamp(1624498448, 1), t: -1 } (notification optime: { ts: Timestamp(1624498448, 5), t: -1 })
2021-06-24T01:34:08.041+0000 I COMMAND  [conn63456637] dropDatabase config - finished
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - starting
2021-06-24T01:34:08.398+0000 I COMMAND  [conn63456637] dropDatabase newsblur - dropping 37 collections

<< SNIP: It goes on for a while... >>

2021-06-24T01:35:18.840+0000 I COMMAND  [conn63456637] dropDatabase newsblur - finished

The above is a lot, but the important bit of information to take from it is that by using a subtractive filter, capturing everything that doesn’t match a known IP, I was able to find the two connections that were made a few seconds apart. Both connections from these unknown IPs occurred only moments before the database-wide deletion. By following the connection ID, it became easy to see the hacker come into the server only to delete it seconds later.
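Concretely, the same access log can be filtered by that connection ID to replay the vandal's whole session, from the accepted connection and client metadata through the string of dropDatabase commands shown above:

$ grep "conn63456637" /var/log/mongodb/mongod.log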

Interestingly, when I visited the IP address of the two connections above, I found a Tor exit router.

This means that it is virtually impossible to track down who is responsible due to the anonymity-preserving quality of Tor exit routers. Tor exit nodes have poor reputations due to the havoc they wreak. Site owners are split on whether to block Tor entirely, but some see the value of allowing anonymous traffic to hit their servers. In NewsBlur’s case, because NewsBlur is a home of free speech, allowing users in countries with censored news outlets to bypass restrictions and get access to the world at large, the continuing risk of supporting anonymous Internet traffic is worth the cost.

3. What will happen to ensure this doesn’t happen again?

Of course, being in support of free speech and providing enhanced ways to access speech comes at a cost. So for NewsBlur to continue serving traffic to all of its worldwide readers, several changes have to be made.

The first change is the one that, ironically, we were in the process of moving to. A VPC, a virtual private cloud, keeps critical servers accessible only from other servers in a private network. But in moving to a private network, I need to migrate all of the data off of the publicly accessible machines. And this was the first step in that process.

The second change is to use database user authentication on all of the databases. We had been relying on the firewall to provide protection against threats, but when the firewall silently failed, we were left exposed. There’s no guarantee this would have been caught sooner if the firewall had failed while authentication was in place, and the password would need to be long enough to resist brute-forcing, because an open but password-protected DB will, eventually, end up on somebody’s list.

Lastly, a change needs to be made as to which database users have permission to drop the database. Most database users only need read and write privileges. The ideal would be a localhost-only user being allowed to perform potentially destructive actions. If a rogue database user starts deleting stories, it would get noticed a whole lot faster than a database being dropped all at once.
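As a minimal sketch of those last two changes (the user name, password, and roles here are illustrative, not NewsBlur's actual setup): authorization gets switched on in the MongoDB config, and the application then authenticates as a user that can read and write the newsblur database but cannot drop it, since the readWrite role doesn't include dropDatabase:

# /etc/mongod.conf
security:
  authorization: enabled

nbset:PRIMARY> use admin
nbset:PRIMARY> db.createUser({
...     user: "newsblur_app",
...     pwd: "a-long-randomly-generated-password",
...     roles: [ { role: "readWrite", db: "newsblur" } ]
... })

Destructive roles like dbAdmin would stay on a separate admin account, ideally one only usable from localhost.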

But each of these is only one piece of a defense strategy. As this well-attended Hacker News thread from the day of the hack made clear, a proper defense strategy can never rely on only one well-setup layer. And for NewsBlur, that layer was an allowlist-only firewall that worked perfectly up until it didn’t.

As usual, the real heroes are backups. Regular, well-tested backups are a necessary component of any web service. And with that, I’ll prepare to launch the big NewsBlur redesign later this week.

samuel
456 days ago
What a week. In other news, new blog design launched!
Cambridge, Massachusetts
deezil
455 days ago
Thanks for being above-board with all this! The HackerNews comment section was a little brutal towards you about some things, but I like that you've been transparent about everything.
samuel
455 days ago
HN only knows how to be brutal, which I always appreciate.
acdha
454 days ago
Thanks for writing this up. That foot-gun really needs fixing.
jprodgers
455 days ago
Somerville, MA
popular
456 days ago
10 public comments
seriousben
454 days ago
Great root cause analysis of a security incident.
Canada
chrisrosa
455 days ago
Great write up Samuel. And kudos for your swift and effective response.
San Francisco, CA
jshoq
456 days ago
This is a great account of how to recover a service from a major outage. In this case, NewsBlur was attacked by a scripter that used a well-known hole to attack the system, and a well-planned and validated backup setup helped NewsBlur get its service back online quickly. This is a great read of a blameless post mortem executed well.
JS
Seattle, WA
jqlive
456 days ago
Thanks for the write up, it was interesting to read and very transparent of you. It would be an interesting read to know how you'll be applying ML Models to Newsblur.
CN/MX
BLueSS
456 days ago
Thanks, Samuel, for your hard work and efforts keeping NewsBlur alive!
jepler
456 days ago
My most commented HN story yet :)
Earth, Sol system, Western spiral arm
jgbishop
456 days ago
Nice writeup.
Durham, NC
fxer
456 days ago
> the hacker come into the server only to delete it seconds later.

> This tells us that the hacker was an automated digital vandal rather than a concerted hacking attempt. And if we were to pay the ransom, it wouldn’t do anything because the vandals don’t have the data and have nothing to release.

Guess they count on users not having enough monitoring to be able to confirm no data was exfil’d
Bend, Oregon
DMack
456 days ago
I remember reading about the mongodb image's terrible defaults ON newsblur, probably even one of your shares. Very surprised to learn that it's still a thing, especially after the waves it made back then
JayM
456 days ago
Bummer. But glad all was well in the end. Yay backups.
Atlanta, GA

Linkdump: May 2021


jprodgers
496 days ago
The first link blew me away, but these monthly link dumps are always worth checking out.
Somerville, MA

Coding on Raspberry Pi remotely with Visual Studio Code


Jim Bennett from Microsoft, who showed you all how to get Visual Studio Code up and running on Raspberry Pi last week, is back to explain how to use VS Code for remote development on a headless Raspberry Pi.

Like a lot of Raspberry Pi users, I like to run my Raspberry Pi as a ‘headless’ device to control various electronics – such as a busy light to let my family know I’m in meetings, or my IoT powered ugly sweater.

The upside of headless is that my Raspberry Pi can be anywhere, not tied to a monitor, keyboard and mouse. The downside is programming and debugging it – do you plug your Raspberry Pi into a monitor and run the full Raspberry Pi OS desktop, or do you use Raspberry Pi OS Lite and try to program and debug over SSH using the command line? Or is there a better way?

Remote development with VS Code to the rescue

There is a better way – using Visual Studio Code remote development! Visual Studio Code, or VS Code, is a free, open source, developer’s text editor with a whole swathe of extensions to support you coding in multiple languages, and provide tools to support your development. I practically live day to day in VS Code: whether I’m writing blog posts, documentation or Python code, or programming microcontrollers, it’s my work ‘home’. You can run VS Code on Windows, macOS, and of course on a Raspberry Pi.

One of the extensions that helps here is the Remote SSH extension, part of a pack of remote development extensions. This extension allows you to connect to a remote device over SSH, and run VS Code as if you were running on that remote device. You see the remote file system, the VS Code terminal runs on the remote device, and you access the remote device’s hardware. When you are debugging, the debug session runs on the remote device, but VS Code runs on the host machine.

Raspberry Pi 4

For example – I can run VS Code on my MacBook Pro, and connect remotely to a Raspberry Pi 4 that is running headless. I can access the Raspberry Pi file system, run commands on a terminal connected to it, access whatever hardware my Raspberry Pi has, and debug on it.

Remote SSH needs a Raspberry Pi 3 or 4. It is not supported on older Raspberry Pis, or on Raspberry Pi Zero.

Set up remote development on Raspberry Pi

For remote development, your Raspberry Pi needs to be connected to your network either by ethernet or WiFi, and have SSH enabled. The Raspberry Pi documentation has a great article on setting up a headless Raspberry Pi if you don’t already know how to do this.

You also need to know either the IP address of the Raspberry Pi, or its hostname. If you don’t know how to do this, it is also covered in the Raspberry Pi documentation.

Connect to the Raspberry Pi from VS Code

Once the Raspberry Pi is set up, you can connect from VS Code on your Mac or PC.

First make sure you have VS Code installed. If not, you can install it from the VS Code downloads page.

From inside VS Code, you will need to install the Remote SSH extension. Select the Extensions tab from the sidebar menu, then search for Remote development. Select the Remote Development extension, and select the Install button.

Next you can connect to your Raspberry Pi. Launch the VS Code command palette using Ctrl+Shift+P on Linux or Windows, or Cmd+Shift+P on macOS. Search for and select Remote SSH: Connect current window to host (there’s also a connect to host option that will create a new window).

Enter the SSH connection details, using user@host. For the user, enter the Raspberry Pi username (the default is pi). For the host, enter the IP address of the Raspberry Pi, or the hostname. The hostname needs to end with .local, so if you are using the default hostname of raspberrypi, enter raspberrypi.local.

The .local syntax is supported on macOS and the latest versions of Windows or Linux. If it doesn’t work for you then you can install additional software locally to add support. On Linux, install Avahi using the command sudo apt-get install avahi-daemon. On Windows, install either Bonjour Print Services for Windows, or iTunes for Windows.

For example, to connect to my Raspberry Pi 400 with a hostname of pi-400 using the default pi user, I enter pi@pi-400.local.
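If you connect regularly, a standard OpenSSH convenience (not specific to VS Code, and the names below are examples to adjust for your own Raspberry Pi) is to add the Pi to your ~/.ssh/config; the Remote SSH extension will then offer the host by name:

# ~/.ssh/config
Host pi-400
    HostName pi-400.local
    User pi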

The first time you connect, it will validate the fingerprint to ensure you are connecting to the correct host. Select Continue from this dialog.

Enter your Raspberry Pi’s password when prompted. The default is raspberry, but you should have changed this (really, you should!).

VS Code will then install the relevant tools on the Raspberry Pi and configure the remote SSH connection.

Code!

You will now be all set up and ready to code on your Raspberry Pi. Start by opening a folder or cloning a git repository and away you go coding, debugging and deploying your applications.

In the remote session, not all extensions you have installed locally will be available remotely. Any extensions that change the behavior of VS Code as an application, such as themes or tools for managing cloud resources, will be available.

Things like language packs and other programming tools are not installed in the remote session, so you’ll need to re-install them. When you install these extensions, you’ll see the Install button has changed to Install in SSH: <hostname> to show it’s being installed remotely.

VS Code may seem daunting at first – it’s a powerful tool with a huge range of extensions. The good news is Microsoft has you covered with lots of hands-on, self-guided learning guides on how to use it with different languages and development tools, from using Git version control, to developing web applications. There’s even a guide to learning Python basics with Wonder Woman!

Jim Bennett

You remember Jim – his blog Expecting Someone Geekier is well good. You can find him on Twitter @jimbobbennett and on github.

The post Coding on Raspberry Pi remotely with Visual Studio Code appeared first on Raspberry Pi.

jprodgers
584 days ago
VS Code has been my IDE for years now, and I love to see how much work they've put into making it really work on the Raspberry Pi. The Pi is going to be a solid development platform quite quickly, I imagine; it kind of already is with this.
Somerville, MA
lousyd
584 days ago
But... why? You could develop on so many other things. And you'd choose the Pi? I mean... I wouldn't.
jprodgers
528 days ago
@lousyd I didn't say this was the best development platform, but it is extremely capable for what it is right out of the box. The latest Pi 4 with 8GB of RAM runs well enough, but MS has put some effort into optimizing VS Code for the Pi, and it is a fully functional IDE. The toolchains for the Pico and Arduino are single-line installs. I could probably set up an entire Pi 4 environment in the time it takes me to pull down all the things I'd need on a Win10 or Mac computer. Also, no problem running KiCad, so you could also make your PCBs on the same computer for a very low cost. What they've accomplished here is truly remarkable. Lots of kids will be getting into developing their own projects cheaply and quickly.