Artificial Intelligence, which the Internet of course facilitates, has been one of the main news items lately. We are beginning to panic about the future. A RxISK post, We Will Get Fooled Again, picks up some of the mounting concern.
The last sentence of the text above introducing the Imagining the Internet Centre perhaps unintentionally catches the worries – “A better tomorrow must be fueled by applied foresight today.”
This post by Ryan Horath came to my attention through a listserv both he and I are on. Most of the people on this list are concerned about Pharma and its devious ways. Ryan is a lawyer and software developer with research interests in technology law. He stands to one side, telling us that the key to the actions we are concerned about might lie over here, in an area we do not at present seem to be aware of.
Curious about what the 26 words are – ironic, as it turns out – I Googled the Communications Decency Act and moved on from there to Section 230, which has a Wikipedia entry in its own right. These are things that I imagine Ryan and lots of others figure the rest of us should know all about – perhaps figure we know nothing about what is going on in the world if we don’t know what the 26 words are.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Over to Ryan.
The Twenty-Six Words
Section 230 of the Communications Decency Act has been called “The Twenty-Six Words That Created the Internet”. Proponents of the law, who seem to be everywhere, claim that modern websites like social media would not exist without Section 230.
Is this true? Are we being told the truth about Section 230, or have so-called consumer interest groups woven a web of misleading tales to sell the public on a law that privileges their funders – the technology industry?
Imagine if my local government passed an ordinance prohibiting the use of loudspeakers in city parks after sunset and before sunrise. This is known in the law as a time, place, or manner restriction. Such restrictions are legal if the government can provide appropriate justification. In this case, the law is meant to protect people who live near the parks from sleep disturbance. It would easily pass Constitutional review and be held valid by a court.
Now imagine the local government passes another law that exempts city council members from this law. Or all Democrats. Or people who work for newspapers. Civil liberties groups would immediately recognize this as an unfair privilege to speak for the groups with the exemption. Everyone else is restricted while the groups with the exemption are not.
Rarely, such privileges are justifiable, but they must pass the strictest Constitutional test, called strict scrutiny. A court would ask whether:
- There is an appropriate goal for the privilege,
- The law is as narrowly tailored to serve that goal as possible, and
- No less restrictive means exists to serve that goal.
Only a law that satisfies all three of these criteria will survive strict scrutiny. Few laws survive such intense review.
This presents an obvious problem for Section 230. The state laws it overrides – known as intermediary liability laws – are restrictions on the First Amendment rights of publishers and distributors of speech. The Supreme Court has defined the limits of these restrictions but has otherwise held them to be valid for appropriate policy reasons.
Section 230 provides an exemption from these valid restrictions for online publishers and distributors only. This invites the same serious Constitutional scrutiny as the park loudspeaker example above. I have done a detailed analysis of this Constitutional question in a separate paper – see Horath.
Without repeating that analysis here, my conclusion is that Section 230 cannot survive strict scrutiny.
If this First Amendment problem is so obvious – and it is – how can it be that it has never been raised in all this time since Section 230’s passage? How can it be that no one even discusses such an obvious issue?
While it is difficult to know the motives of so many different entities, the short answers are likely ideology and money.
Early Internet advocacy groups like the Electronic Frontier Foundation (EFF) were deeply libertarian and viscerally anti-government. They saw the government as only a force for evil and saw themselves – the leaders of the early Internet – as only a force for good. For years they searched for a way to freeze the government out of the Internet so they could turn it into the utopia they envisioned, free from government interference.
This anti-government ideology was the driving force behind the drafting of Section 230. Groups like EFF and their offshoot, the Center for Democracy and Technology (CDT), allied with industry leaders – who they saw as benevolent forces – to keep the evil government at bay. Having staked their reputations on this cause, they have been unable and unwilling to reassess their earlier positions, locked in a sunk cost trap where time works to harden their position.
The other ingredient is money. Technology industry leaders and companies – especially Google – have flooded these same organizations and academia with money for decades. Their chosen people hold high positions with influence in academia and are routinely called upon to comment on Section 230. Control of academia is so complete that it goes unnoticed because it is considered normal.
For example, Daphne Keller is the Director of the Program on Platform Regulation at Stanford Law School. Prior to moving to Stanford, Keller held a similar position at Google and led the legal team in charge of their search product – Google’s most important product. Keller routinely comments on Section 230 in the media and her prior affiliation with Google is rarely mentioned.
Tech academia and Internet advocacy groups are plagued by similar conflicts of interest, and these too are rarely mentioned in the media. The Tech Transparency Project calculated that 44 entities that submitted briefs siding with Google in a recent Supreme Court case had ties to Google itself. Many of these entities are a who’s who of the tech, academic and Internet advocacy world. It is difficult to find academics or groups that do not have substantial ties to Google or some other technology giant.
Google says a ‘broad cross-section’ of experts, academics, and organizations support its legal position. (The company is paying many of them).
Together with Internet advocacy groups, tech academia has long dominated discussion of Section 230. They control the debate and propagandize the public with misleading claims about the wonders of their law. These groups surely know that Section 230 cannot survive the Constitutional analysis I have outlined. Their solution to this has been to ignore this problem and ensure it does not enter public debate, while misleading the public with simplistic talking points that sound attractive, but are meant to deceive.
A common refrain is that Section 230 protects the First Amendment rights of websites. This is misleading because what it actually does is privilege their First Amendment rights over those of all other publishers and distributors of speech.
The next time someone recites this line in defense of Section 230, be sure to respond
“That is the problem.”
Healy Footnote
Will A.I. be privileged in this way or will this be the moment when the rubber hits the road?
annie says
‘Most of the people on this list are concerned about Pharma and its devious ways.’
“The Twenty-Six Words That Broke the Internet” – see Horath.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Section 230(c)(2) further provides “Good Samaritan” protection from civil liability for operators of interactive computer services in the good faith removal or moderation of third-party material they deem “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
see Horath.
It is difficult to exaggerate how extraordinary the drafting process was for Section 230. Imagine if Pfizer had lost a court case and then sat down with two legislators and rewrote the law so they could carry on their business as they desired without losing another lawsuit. The public would be up in arms. Yet that is precisely what happened with Section 230. But it is much worse than that.
An example of the dangerous consequences of this power to suppress speech is the pharmaceutical industry. Researchers in evidence-based medicine worry that advertising purchases by pharmaceutical companies will give them control over how some content is censored on social media. These companies already have substantial control over print media, radio, television, scientific journals, and university research departments through advertising and research money. They also exert influence over government agencies in multiple ways, including money and the revolving door of employees moving between industry and government. Over time, they will seek to exert their influence on how social media posts are censored.
Pharmaceutical companies will become the ultimate judge of whether some posts are considered “misinformation” and censored online.
Billionaire AI wars as Elon Musk says STOP but Bill Gates urges ‘age of bots’
Tesla billionaire Elon Musk joined thousands in signing an open letter calling for a halt to artificial intelligence development – the next day, Microsoft boss Bill Gates penned a blog opposing the demands
https://www.mirror.co.uk/news/us-news/billionaire-ai-wars-elon-musk-29656122
Bill Gates
@BillGates
Everyone should benefit from artificial intelligence, and not just people in rich countries. This is the priority for my own work.
Bill Gates just published a 7-page letter about AI and his predictions for its future
https://www.businessinsider.com/bill-gates-ai-letter-chatbots-future-predictions-2023-3?r=US&IR=T
“or to provide medical advice to people where doctors aren’t easily accessible.”
“AI is already used in healthcare to analyze medical data and design drugs, Gates wrote, but the next wave of AI tools could assist with predicting medication side effects and calculating dosage levels.”
I think Pfizer would be happy with that;
“That is the problem.” …
annie says
When it was formed in 2010, Moderna developed its entire drug discovery and manufacturing processes around digital, or the idea of being infused by artificial intelligence. It was a biotech startup born in the Amazon Web Services (AWS) cloud.
https://www.zdnet.com/article/moderna-leveraging-its-ai-factory-to-revolutionise-the-way-diseases-are-treated/
Edward Dowd Retweeted
The Vigilant Fox
@VigilantFox
HUGE: Court Orders 24,000 Pages of Moderna Documents to Start Releasing to the Public in July • Of the 24,000 pages, 13,685 detail adverse events. • Attorney @AaronSiriSG , who was responsible for the release of the Pfizer documents, also has a pending lawsuit against Moderna.
@DailyClout @NaomiRWolf
https://twitter.com/VigilantFox/status/1645578142923345925
susanne says
Hi! From Chris in the UK NHS (National (Dysfunctional) Health Service)
Using AI and Machine Learning to aid Prescription Processing
Hi, I’m Chris Suter and I’m the Head of Digital Platforms and Innovation for all digital, insight and technology solutions here at the NHS Business Services Authority (NHSBSA).
I joined in February 2017 and since then I have been engulfed in a world where artificial intelligence (AI) and Machine Learning (ML) are moving from a science fiction concept to the forefront of patients’ and clinicians’ day-to-day working lives within the healthcare space.
As an Arm’s Length Body of the Department of Health and Social Care, the NHSBSA provides a range of services to NHS organisations, NHS contractors, patients and the public. These include Dental Services, Prescription Services and NHS Jobs. The NHSBSA supports the processing of prescriptions created predominantly in primary care, and we are responsible for processing on average 45 million prescriptions per month, of which 18 million are paper and the rest are received via the Electronic Prescription Service (EPS). The average cost per prescription for the NHS is £20, which results in around £9.4 billion paid every year and represents approximately 10% of the overall annual NHS costs.
I wanted to look further into identifying areas where we see the most inefficiencies, which in turn have an effect on services, lead to frustration and have a cost impact, and see how we could improve this.
For the last 16 weeks we have carried out a piece of work to see if we can introduce Machine Learning into the prescription processing function to remove years’ worth of hot fixing and workarounds which include manual checks and validations.
Historically…
Prescription processing is carried out every month by the NHSBSA, and includes on average 11,000+ pharmacies boxing up tons of paper prescriptions and sending them to the NHSBSA to scan and process. As you will appreciate, not all prescriptions look the same: some are printed, some are electronic, not to mention the handwritten ones, which can often be hard to make out!
When introducing any system you can’t predict the future, and this is usually the main driver for continued change. Unfortunately, this can result in future compromises and complexities, and the NHSBSA is no exception to this challenge. The current process and systems restrict our ability to make fast-paced changes and offer solutions for better access to more detailed data.
What we set out to achieve using Machine Learning…
With any innovation concept, the goal is to take the current system and ways of working and see if new technologies offer improvements. These can be operating efficiencies, cost savings or removing complexity.
To achieve this we have been working closely with our cloud and technology partners, Microsoft and Amazon Web Services, working together to understand the issues and concerns within the current process and what technologies were available to help. We started with three main goals:
Reading accuracy
As close to 100% of prescription data as possible automatically extracted from printed paper prescriptions. This would include patient and drug data. Determine what was possible when faced with handwritten prescriptions.
Processing Improvements
Determine what efficiencies were possible in the end to end process.
Future Use Cases
Opportunities for our partners to add value from the data we collect (i.e. anomaly detection, medical insight, drug usage and statistical analysis).
Security is Job Zero…
As it has been drilled into me for a number of years, we are all responsible for the security of the systems we create. The introduction of AI and Machine Learning should also follow the same rules and boundaries with no exceptions.
This piece of work needed to consider the security and controls of sensitive Patient Identifiable Information (PII). Therefore we carried out a number of assurances against best practice guides and technologies, including:
• the “Code of Conduct for the use of AI” within the NHS from the Department of Health and Social Care
• the “Caldicott Principles” from the Department of Health and Social Care
• the “Cloud Security Principles” from the National Cyber Security Centre
Working as One…
From the get-go, we wanted to embed the NHSBSA teams from Development, Platform and Data with our partners’ technology teams from Microsoft and Amazon Web Services. This was to enable cross-training, knowledge sharing and understanding of the Machine Learning methodology, which is critical to ensuring we are self-sufficient to house future developments, upgrades and technology adoption.
Over the 16 weeks of working with our partners, this has been one of the most surprising outcomes. Interaction between the teams from the NHSBSA and our partners has been a huge success, with energetic two-way conversations taking place and knowledge flowing both ways, which has helped generate further insights into the process and achieve a better outcome.
The Outcome…
From a standing start, with just an idea of using new technologies, we have been able to build a number of micro-services. These are able to take a prescription image and accurately extract data, carry out validation and create insights. The microservices include:
Image re-processing
Dealing with toner marks, human tick marks, misalignment, handwritten amendments and print alignments.
Machine Learning models to extract data
Using image extraction models, text extraction models, medical and drug models.
Image processing and storing at scale
Using cloud technologies to provide compute and storage via serverless technologies and software-as-a-service packages.
Validation of the data captured
Utilising the current data sources the NHSBSA has, including repeat prescription data, NHS drug catalogue, Prescriber register and Pharmacy register.
Data Analytics
Creation of a data analytic module to enable analysis. This could be for current medication trends, future drug forecasting for payment and supply, and creating health dashboards.
Reduction of Processing Cost
Reducing the number of manual resources required to extract data enables the NHSBSA to invest in these people and transfer them to more value-added activities.
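To make the extract-and-validate flow described above concrete, here is a minimal, purely illustrative Python sketch of that kind of pipeline. It is not NHSBSA code: the drug code, register entry, confidence threshold and function names are all invented, standing in for the real extraction models, NHS drug catalogue and prescriber register.

```python
from dataclasses import dataclass

# Invented stand-ins for the reference data mentioned above
# (NHS drug catalogue, prescriber register); real codes differ.
DRUG_CATALOGUE = {"0407010H0": "Paracetamol 500mg tablets"}
PRESCRIBER_REGISTER = {"G1234567"}

@dataclass
class Extraction:
    prescriber_id: str
    drug_code: str
    quantity: int
    confidence: float  # the model's confidence in its own reading

def preprocess(image: bytes) -> bytes:
    """Placeholder for image clean-up: toner marks, tick marks, alignment."""
    return image

def extract(image: bytes) -> Extraction:
    """Placeholder for the text- and drug-extraction models."""
    return Extraction("G1234567", "0407010H0", quantity=32, confidence=0.97)

def validate(rx: Extraction, min_confidence: float = 0.9) -> list[str]:
    """Check an extraction against reference data; anything flagged here
    would go to a human operator instead of being processed automatically."""
    issues = []
    if rx.confidence < min_confidence:
        issues.append("low-confidence read: route to manual review")
    if rx.drug_code not in DRUG_CATALOGUE:
        issues.append(f"unknown drug code {rx.drug_code}")
    if rx.prescriber_id not in PRESCRIBER_REGISTER:
        issues.append(f"prescriber {rx.prescriber_id} not on the register")
    if rx.quantity <= 0:
        issues.append("implausible quantity")
    return issues

def process(image: bytes) -> dict:
    rx = extract(preprocess(image))
    issues = validate(rx)
    return {"extraction": rx, "issues": issues, "auto_processed": not issues}

if __name__ == "__main__":
    print(process(b"scanned-prescription-image-bytes"))
```

The point is only the shape of the flow: clean up the image, extract structured fields, cross-check them against existing reference data, and hand anything doubtful to a person.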
Unlocking Potential…
One of the key outcomes of this activity is the future potential of the solution we have created. Not only will it solve current problems, but it also has been designed to be easily transferable to other paper-based scenarios not only in the NHSBSA but potentially within the entire NHS family.
Internally, the NHSBSA now has a scanning facility, a secure cloud platform and the skills to programme and train machine learning models. This has the potential to benefit other areas of the business that heavily rely on paper forms such as Pensions, Student Bursary and Citizen Services.
What Next…
So, as the 16-week proof of concept is now up, it doesn’t stop there: the NHSBSA, along with Microsoft and Amazon Web Services, is planning to implement this service at scale over the next six months. In addition to this, we will be further validating the findings and seeking to expand and take full advantage of this solution for other form types.
mary H says
In my experience there is nothing to be proud of as regards this “outsourcing” of patients’ prescriptions.
Many months ago now, our GP surgery ordered me a Covid test kit as I had a bad cough. It was ‘outsourced’ to Amazon – and it took well over a week to get to our address, way too late for the test that the GP wanted done. How is that working better than picking up from the local chemist?
Today, I went to the chemist to collect my husband’s ‘outsourced’ repeat prescription. It turned out that there were two for him, both exactly the same – one dated at the end of last week and one a couple of days earlier. How is that supposed to be saving money? As it happens, in the case of this particular medication, it will keep and be used over the next two months, but that wouldn’t be the case with all medications, would it?
susanne says
‘Depression medication raises risk of suicide’ – article in The Telegraph newspaper
IIPDW
Yesterday the UK’s Telegraph newspaper published:
“Antidepressants increase the risk of suicide for some patients, scientists warn”
“Research into reports of people taking their own life showed drugs can be the motive and also give them the means.
“Antidepressants raise the risk of suicide while also giving people the means to kill themselves, scientists have warned, after discovering thousands of inquests linked to the drugs….”
annie says
Important…
John Read
@ReadReadj
Full article, free. Antidepressants and Suicide: 7,829 Inquests in England and Wales
@Institute_PDW @markhoro @joannamoncrieff @CEP_UK @JDaviesPhD @jf_moore @ClinpsychLucy @peterkinderman @PGtzsche @PCGroot @UEL_News @OlgaOruncima @JamesScurryUK
https://connect.springerpub.com/content/sgrehpp/25/1/8
recovery&renewal Retweeted
Dan Johnson
@DanJohnsonAB
Replying to @JDaviesPhD
The evidence for increased risk of adolescents and adults remains clear, yet MDs prescribe it, based on vague understanding, and minimize the risk if they mention it at all.
https://pubmed.ncbi.nlm.nih.gov/33685964/
recovery&renewal Retweeted
Katinka Blackford Newman
@antideprisks
Antidepressants increase the risk of suicide for some patients, scientists warn
Research into reports of people taking their own life showed drugs can be the motive and also give them the means
By Sarah Knapton, Science Editor, 17 April 2023, 5:25pm
https://www.telegraph.co.uk/news/2023/04/17/antidepressants-suicide-drugs-prozac-research/
The inquest reports were gathered by the Antidepaware website …
annie says
Artificial Intelligence
I Was There When: AI helped create a vaccine
I Was There When is an oral history project that’s part of the In Machines We Trust podcast. It features stories of how breakthroughs and watershed moments in artificial intelligence and computing happened, as told by the people who witnessed them. In this episode we meet Dave Johnson, the chief data and artificial intelligence officer at Moderna.
https://www.technologyreview.com/2022/08/26/1058743/i-was-there-when-ai-helped-create-a-vaccine-covid-moderna-mrna/
Jennifer Strong: The genetic sequence of the COVID-19 virus was first published in January 2020.
It kicked off an international sprint to develop a vaccine… and represented an unprecedented collaboration between the pharmaceutical industry and governments around the world.
And it worked.
Months later, the U.S. Government approved emergency authorizations for multiple vaccines.
I’m Jennifer Strong, and this is I Was There When—an oral history project featuring the stories of breakthroughs and watershed moments in AI and computing… as told by those who witnessed them.
This episode, we meet Dave Johnson, the chief data and artificial intelligence officer at Moderna.
Dave Johnson: Moderna is a biotech company that was founded on the promise of mRNA technology.
My name is Dave Johnson. I’m chief data and AI officer at Moderna. mRNA is essentially an information molecule. It encodes a sequence of amino acids which, when they enter the cells in your body, produce a protein, and that protein can perform a variety of different functions in your body, from curing a rare disease, potentially attacking cancer, or even a vaccine to battle a virus like we’ve seen with Covid.
What’s so fundamentally different about this approach from the typical pharmaceutical development is it’s much more of a design approach. We’re saying we know what we want to do. And then we’re trying to design the right information molecule, the right protein, that will then have that effect in the body.
And if you know anything about pharmaceutical development, it tends to be a very serial process. You know, you start with some kind of initial concept, some initial idea and you test it in Petri dishes or in, you know, small experiments. And then you move on to preclinical testing. And if all of that looks good, then you’re finally moving off to, to human testing and you go through several different phases of clinical trials where phase three is the, the largest one where you’re proving the efficacy of this drug.
And that whole process from end to end can be immensely expensive, cost billions of dollars and take, you know, up to a decade to do that. And in many cases, it still fails. You know, there’s countless diseases out there right now that have no vaccine for them, that have no treatment for them. And it’s not like people haven’t tried, it’s just, they’re, they’re challenging.
And so we built the company thinking about: how can we reduce those timelines? How can we target many, many more things? And so that’s how I kind of entered into the company. You know, my background is in software engineering and data science. I actually have a PhD in what’s called information physics—which is very closely related to data science.
And I started when the company was really young, maybe a hundred, 200 people at the time. And we were building that early preclinical engine of a company, which is, how can we target a bunch of different ideas at once, run some experiments, learn really fast and do it again. Let’s run a hundred experiments at once and let’s learn quickly and then take that learning into the next stage.
So if you wanna run a lot of experiments, you have to have a lot of mRNA. So we built out this massively parallel robotic processing of mRNA, and we needed to integrate all of that. We needed systems to kind of drive all of those, uh, robotics together. And, you know, as things evolved as you capture data in these systems, that’s where AI starts to show up. You know, instead of just capturing, you know, here’s what happened in an experiment, now you’re saying let’s use that data to make some predictions.
Let’s take out decision making away from, you know, scientists who don’t wanna just stare and look at data over and over and over again. But let’s use their insights. Let’s build models and algorithms to automate their analyses and, you know, do a much better job and much faster job of predicting outcomes and improving the quality of our, our data.
So when Covid showed up, it was really, uh, a powerful moment for us to take everything we had built and everything we had learned, and the research we had done and really apply it in this really important scenario. Um, and so when this sequence was first released by Chinese authorities, it was only 42 days for us to go from taking that sequence, identifying, you know, these are the mutations we wanna do. This is the protein we want to target.
Forty-two days from that point to actually building up clinical-grade, human safe manufacturing, batch, and shipping it off to the clinic—which is totally unprecedented. I think a lot of people were surprised by how fast it moved, but it’s really… We spent 10 years getting to this point. We spent 10 years building this engine that lets us move research as quickly as possible. But it didn’t stop there.
We thought, how can we use data science and AI to really inform the, the best way to get the best outcome of our clinical studies. And so one of the first big challenges we had was we have to do this large phase three trial to prove in a large number, you know, it was 30,000 subjects in this study to prove that this works, right?
That’s a huge study. Covid had been flaring, um, infecting countless people. And we had to figure out: where do we run our studies? We’re gonna pick a hundred locations in the US to run this study and we needed to balance finding places where we have kind of the right racial diversity that’s the right makeup for the country.
We needed to balance… kind of practical concerns. If we need a, you know, the right size facility and clinical trial sites that can deliver quality data. And we need to find places where Covid has not already hit. So at the time New York, for example, was already heavily hit. And so it wouldn’t be an ideal place to run a clinical study because we have to accrue cases of it.
So we had to find places that weren’t quite yet hit, but places that we expected to actually, you know, surge, you know, maybe six weeks after the study started after people had been inoculated. So that’s a really challenging problem we had to solve. And I wanna say, you know, we, we didn’t do this all entirely internally.
We worked with countless external partners. And I can’t tell you the number of different epidemiology models that we saw. It seemed like everybody was an epidemiologist all of a sudden. But we incorporated all that learning all that information into our internal decision making and used that to try to find: these are the optimal places that we should run this study.
And then even while we were running this study, we were saying, how can we continue to optimize and do better? You know, we built real time analytics into our studies enrollment. So as patients or subjects enrolled into the study, were treated with our vaccine, we are monitoring the diversity of this: the age, the gender, and racial diversity to ensure that the final makeup of this study, when all said and done was representative of the US.
We got, I wanna say, maybe 80% of the way through the study. And we realized, look, we are not gonna meet our, our objectives because the level of volunteers aren’t quite what we wanted. And so we made the, the really difficult decision to say, look, we need to throttle some areas of the country and focus on outreach in different areas to get the right makeup so that the study was representative.
All told, it was about a year from when we, you know, started this journey on Covid to when we got the emergency use authorization for the vaccine—which again is really unprecedented for something that usually takes many years. And I’ll say for myself personally, it was just such an amazing kind of emotional moment of, you know, I joined the company almost eight years earlier, not thinking necessarily I would ever use one of our own medicines because we weren’t even doing vaccines at the time. But to have that injected in my arm and for my family to get it for my friends and everyone else to, to see that benefit and for so many other people in the world, was just an amazing moment for us.
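A footnote on the enrollment-monitoring passage above: the “real time analytics” Johnson describes amounts to continuously comparing the running make-up of enrollment against a target make-up and flagging the gaps. The sketch below is only an illustration of that idea, not Moderna’s system; the categories, target fractions and tolerance are invented.

```python
from collections import Counter

# Invented target make-up for a trial population; the real targets and
# category definitions used by Moderna are not given in the transcript.
TARGETS = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06, "other": 0.02}

def enrollment_gaps(enrolled: list[str], tolerance: float = 0.02) -> dict[str, float]:
    """Return groups whose running share of enrollment falls short of the
    target by more than `tolerance`, the kind of signal that might prompt
    slowing enrollment in one place and pushing outreach in another."""
    counts = Counter(enrolled)
    total = len(enrolled)
    gaps = {}
    for group, target in TARGETS.items():
        share = counts.get(group, 0) / total if total else 0.0
        if target - share > tolerance:
            gaps[group] = round(target - share, 3)
    return gaps

# Example: 1,000 subjects enrolled so far, skewed toward one group.
enrolled_so_far = (["white"] * 700 + ["black"] * 80 + ["hispanic"] * 150
                   + ["asian"] * 50 + ["other"] * 20)
print(enrollment_gaps(enrolled_so_far))  # {'black': 0.05, 'hispanic': 0.04}
```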
White House launching $5 billion program to speed coronavirus vaccines
‘Project Next Gen’ would succeed ‘Operation Warp Speed’ with a mission to develop next-generation vaccines and therapies
https://www.washingtonpost.com/health/2023/04/10/operation-warp-speed-successor-project-nextgen/
Key parts of the new initiative are not yet finalized. The White House is still considering candidates to lead the program, officials said. The vetting process has been complicated by Democrats’ desire to avoid questions of conflicts of interest that dogged Operation Warp Speed, after Trump officials selected Moncef Slaoui, a pharmaceutical industry executive with significant stock holdings, to lead that program. That decision had prompted criticism from Democrats although health officials praised Slaoui’s knowledge of the industry and credited his successful bets on vaccine candidates from Pfizer-BioNTech and Moderna.
Slaoui, 61, had spent 30 years at GSK, overseeing vaccine development at that pharmaceutical giant.
https://www.cnbc.com/2021/03/24/moncef-slaoui-fired-from-galvani-bioelectronics-board-of-directors-over-sexual-harassment-allegations.html
At the time he was appointed, he was on the board of Moderna. Slaoui resigned from Moderna and sold his shares in the company, whose Covid vaccine was the second to receive emergency use authorization in the United States.
Better luck next time, with ‘Artificial Intelligence’ …
annie says
‘Quickly Clean’ …
Artificial Intelligence, a Major Factor Behind Pfizer’s US$900M Profit
https://www.analyticsinsight.net/artificial-intelligence-a-major-factor-behind-pfizers-us900m-profit/#:~:text=Pfizer%20took%20artificial%20intelligence%20as%20a%20core%20technology,conducted%20a%20hackathon%20to%20choose%20its%20right%20mate.
AI-powered tool to quickly clean vaccine clinical data
Pfizer took artificial intelligence as a core technology to power its Covid-19 vaccine motives.
The disruptive trend was the main reason the pharmaceutical company managed to roll out its vaccine in less than a year. Pfizer made a number of partnerships with digital players and even conducted a hackathon to choose its right mate. Generally, vaccine making and clinical trials are a lengthy process that takes months, and sometimes years, to complete. But the scientists working at Pfizer raced to develop the Covid-19 vaccine due to its immediate need and the deteriorating global situation. During the vaccine rollout process, the pharmaceutical company used artificial intelligence in many phases of vaccine making and trials. For example, it usually takes more than 30 days after a trial phase for patient data to be cleaned up so scientists can then analyze the results. The manual process requires data scientists to go through datasets to check for coding errors and other inconsistencies that naturally occur when collecting tens of millions of data points.
Fortunately, technology reduced the workload seamlessly. A new machine learning tool called ‘Smart Data Query (SDQ)’ performed analysis and made the data available in just 22 hours after meeting the primary efficacy case counts. The machine learning tool also ensured data quality throughout the trial…
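For a sense of what “cleaning” trial data involves: a tool in the SDQ mould is essentially a battery of automated consistency checks run over incoming records, surfacing the errors a data manager would otherwise hunt for by hand. The sketch below is only an illustration of that idea, not Pfizer’s tool; the record layout, ID format and value ranges are invented.

```python
import re

# Invented trial records; field names, ID format and allowed values are
# made up to illustrate the kind of checks an automated query tool runs.
records = [
    {"subject_id": "TRIAL01-0001", "dose_mg": 30, "visit": "V1", "adverse_event": "headache"},
    {"subject_id": "TRIAL01-0001", "dose_mg": 30, "visit": "V1", "adverse_event": "headache"},  # duplicate
    {"subject_id": "subject 7", "dose_mg": 300, "visit": "V9", "adverse_event": ""},
]

ID_PATTERN = re.compile(r"^TRIAL\d{2}-\d{4}$")
VALID_VISITS = {"V1", "V2", "V3"}

def data_queries(recs: list[dict]) -> list[str]:
    """Flag records needing correction before analysis."""
    seen, issues = set(), []
    for i, r in enumerate(recs):
        key = (r["subject_id"], r["visit"])
        if key in seen:
            issues.append(f"record {i}: duplicate entry for {key}")
        seen.add(key)
        if not ID_PATTERN.match(r["subject_id"]):
            issues.append(f"record {i}: malformed subject id {r['subject_id']!r}")
        if not 0 < r["dose_mg"] <= 100:
            issues.append(f"record {i}: dose {r['dose_mg']} mg outside expected range")
        if r["visit"] not in VALID_VISITS:
            issues.append(f"record {i}: unknown visit code {r['visit']}")
        if not r["adverse_event"]:
            issues.append(f"record {i}: missing adverse-event field")
    return issues

for issue in data_queries(records):
    print(issue)
```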
Boots are on the ground…
annie says
Starship – Enterprise…
– Elon rocks the AI Establishment
5.4.3.2.1 – and we have ‘lift off’
Dr. Augusto Germán Roux Retweeted
APOCTOZ
@Apoctoz
Apr 13
BBC misinfo…
https://twitter.com/Apoctoz/status/1646293847482277896
Elon Musk
@elonmusk
This time with video & better audio
https://twitter.com/elonmusk/status/1646187123077447680
Musk plans AI creation to counter ‘politically correct’ ChatGPT
https://www.msn.com/en-gb/money/technology/musk-plans-ai-creation-to-counter-politically-correct-chatgpt/ar-AA19Zqs9?ocid=msedgdhp&pc=U531&cvid=36042cb0de504b26f5c03c91f3032f79&ei=30
Elon Musk
@elonmusk
Apr 15
The BBC interview last week was exceptional in illustrating why you cannot rely on the media for truth
https://twitter.com/RWMaloneMD/status/1647631058635038723
“Vaccines off the hook” …
annie says
News18…
Matthew Herper
@matthewherper
Perfect summary of the worst outcome of AI and the odds it happens.
Paul Graham
@paulg
Apr 18
Replying to @sudan_shoe
We all die and I have no idea respectively.
‘We’re not antivaxxers… we have lost loved ones’: Widower of BBC presenter who died from Covid-19 vaccine complications launches legal action against AstraZeneca on behalf of 75 people whose ‘relatives passed away or suffered jab-related injuries’
https://www.dailymail.co.uk/news/article-11959425/Widower-BBC-presenter-died-jab-no-alternative-file-suit-against-AstraZeneca.html
Doctor’s death due to AstraZeneca Covid vaccine reaction – inquest
https://www.bbc.co.uk/news/uk-england-london-65321937
‘Healthy’ doctor, 32, died after rare severe reaction to AstraZeneca Covid jab
https://www.msn.com/en-gb/health/other/healthy-doctor-32-died-after-rare-severe-reaction-to-astrazeneca-covid-jab/ar-AA1a3h6D?ocid=msedgdhp&pc=U531&cvid=c96eea0e45534683ba0a8d0f85a7999a&ei=12
BBC congratulates itself for ‘brilliant’ Musk scoop despite Covid vaccine row
News18
https://www.msn.com/en-gb/entertainment/celebrity/bbc-congratulates-itself-for-brilliant-musk-scoop-despite-covid-vaccine-row/ar-AA19Pn9L?ocid=msedgdhp&pc=U531&cvid=71d70d665c854fdcbb09e2024e692dba&ei=14
Dr Clare Craig (not one of her impersonators)
@ClareCraigPath
This needs sharing…
report on UK medicines regulator just out
https://perseus.org.uk/wp-content/uploads/2023/04/Perseus_MHRA_Main-Report-1-1.pdf
The serious shortcomings identified raise grave concerns about the ability of MHRA to fulfil its statutory duty to protect the public from harm, by properly regulating the safety and effectiveness of medicines in the UK. Given the level of reported Covid-19 vaccine injuries and the excess deaths across all age groups, these products must be paused while they are properly investigated, and a full independent inquiry launched into MHRA’s regulatory processes and performance.
‘Scoops’ are rare, and far between
News18
“Netizens laugh at the plight of BBC journalist” …