Google Launches Fuchsia
Google launches its third major operating system, Fuchsia

Google is officially rolling out a new operating system, called Fuchsia, to consumers. The release is a bit hard to believe at this point, but Google confirmed the news to 9to5Google, and several members of the Fuchsia team have confirmed it on Twitter. The official launch date was apparently yesterday. Fuchsia is certainly getting a quiet, anti-climactic release, as it’s only being made available to one device, the Google Home Hub, aka the first-generation Nest Hub. There are no expected changes to the UI or functionality of the Home Hub, but Fuchsia is out there. Apparently, Google simply wants to prove out the OS in a consumer environment.

Fuchsia’s one launch device was originally called the Google Home Hub and is a 7-inch smart display that responds to Google Assistant commands. It came out in 2018. The device was renamed the “Nest Hub” in 2019, and it’s only this first-generation device, not the second-generation Nest Hub or Nest Hub Max, that is getting Fuchsia. The Home Hub’s OS has always been an odd duck. When the device was released, Google was pitching a smart display hardware ecosystem to partners based on Android Things, a now-defunct Internet-of-Things/kiosk OS. Instead of following the recommendations it gave to hardware partners, Google loaded the Home Hub with its in-house Google Cast Platform—and then undercut all its partners on price.

Fuchsia has long been a secretive project. We first saw the OS as a pre-alpha smartphone UI that was ported to Android in 2017. In 2018, we got the OS running natively on a Pixelbook. After that, the Fuchsia team stopped doing its work in the open and stripped all UI work out of the public repository.

There’s no blog post or any fanfare at all to mark Fuchsia’s launch. Google’s I/O conference happened last week, and the company didn’t make a peep about Fuchsia there, either. Really, this ultra-quiet, invisible release is the most “Fuchsia” launch possible.

Fuchsia is something very rare in the world of tech: a built-from-scratch operating system that isn’t based on Linux. Fuchsia uses a microkernel called “Zircon” that Google developed in-house. Creating an operating system entirely from scratch and bringing it all the way to production sounds like a difficult task, but Google managed to do exactly that over the past six years. Fuchsia’s primary app-development platform is Flutter, a cross-platform UI toolkit from Google. Flutter runs on Android, iOS, and the web, so writing Flutter apps today for existing platforms means you’re also writing Fuchsia apps for tomorrow.

The Nest Hub’s switch to Fuchsia is kind of interesting because of how invisible it should be. It will be the first test of Fuchsia’s future-facing Flutter app support—the Google smart display interface is written in Flutter, so Google can take the existing interface, rip out all the Google Cast guts underneath, and plop the exact same interface code down on top of Fuchsia. Google watchers have long speculated that this was the plan all along: rather than forcing a disruptive OS switch, Google could just get coders to write in Flutter and then seamlessly swap out the operating system underneath.

So, unless we get lucky, don’t expect a dramatic hands-on post of Fuchsia running on the Nest Hub. It’s likely that there isn’t currently much to see or do with the new operating system, and that’s exactly how Google wants it. Fuchsia is more than just a smart-display operating system, though. An old Bloomberg report from 2018 has absolutely nailed the timing of Fuchsia so far, saying that Google wanted to first ship the OS on connected home devices “within three years”—the report turns three years old in July. The report also laid out the next steps for Fuchsia, including an ambitious expansion to smartphones and laptops by 2023.
Taking over the Nest Hub is one thing—no other team at Google really has a vested interest in the Google Cast OS (you could actually argue that the Cast OS is on the way out, as the latest Chromecast is switching to Android). Moving the OS onto smartphones and laptops is an entirely different thing, though, since the Fuchsia team would crash into the Android and Chrome OS divisions. Now you’re getting into politics.

Evolving to a more equitable AI

The pandemic that has raged across the globe over the past year has shone a cold, hard light on many things—the varied levels of preparedness to respond; collective attitudes toward health, technology, and science; and vast financial and social inequities. As the world continues to navigate the covid-19 health crisis, and some places even begin a gradual return to work, school, travel, and recreation, it’s critical to resolve the competing priorities of protecting the public’s health equitably while ensuring privacy.

The extended crisis has led to rapid change in work and social behavior, as well as an increased reliance on technology. It’s now more critical than ever that companies, governments, and society exercise caution in applying technology and handling personal information. The expanded and rapid adoption of artificial intelligence (AI) demonstrates how adaptive technologies are prone to intersect with humans and social institutions in potentially risky or inequitable ways.

“Our relationship with technology as a whole will have shifted dramatically post-pandemic,” says Yoav Schlesinger, principal of the ethical AI practice at Salesforce. “There will be a negotiation process between people, businesses, government, and technology; how their data flows between all of those parties will get renegotiated in a new social data contract.”

AI in action

As the covid-19 crisis began to unfold in early 2020, scientists looked to AI to support a variety of medical uses, such as identifying potential drug candidates for vaccines or treatment, helping detect potential covid-19 symptoms, and allocating scarce resources like intensive-care-unit beds and ventilators. Specifically, they leaned on the analytical power of AI-augmented systems to develop cutting-edge vaccines and treatments.

While advanced data analytics tools can help extract insights from a massive amount of data, the result has not always been more equitable outcomes. In fact, AI-driven tools and the data sets they work with can perpetuate inherent bias or systemic inequity. Throughout the pandemic, agencies like the Centers for Disease Control and Prevention and the World Health Organization have gathered tremendous amounts of data, but the data doesn’t necessarily accurately represent populations that have been disproportionately and negatively affected—including Black, brown, and Indigenous people—nor do some of the diagnostic advances they’ve made, says Schlesinger.

For example, biometric wearables like Fitbit or Apple Watch demonstrate promise in their ability to detect potential covid-19 symptoms, such as changes in temperature or oxygen saturation. Yet those analyses rely on often flawed or limited data sets and can introduce bias or unfairness that disproportionately affects vulnerable people and communities.

“There is some research that shows the green LED light has a more difficult time reading pulse and oxygen saturation on darker skin tones,” says Schlesinger, referring to the semiconductor light source. “So it might not do an equally good job at catching covid symptoms for those with black and brown skin.”

AI has shown greater efficacy in helping analyze enormous data sets. A team at the Viterbi School of Engineering at the University of Southern California developed an AI framework to help analyze covid-19 vaccine candidates. After identifying 26 potential candidates, it narrowed the field to 11 that were most likely to succeed. The data source for the analysis was the Immune Epitope Database, which includes more than 600,000 contagion determinants arising from more than 3,600 species.

Other researchers from Viterbi are applying AI to decipher cultural codes more accurately and better understand the social norms that guide ethnic and racial group behavior. That can have a significant impact on how a certain population fares during a crisis like the pandemic, owing to religious ceremonies, traditions, and other social mores that can facilitate viral spread.

Lead scientists Kristina Lerman and Fred Morstatter have based their research on Moral Foundations Theory, which describes the “intuitive ethics” that form a culture’s moral constructs, such as caring, fairness, loyalty, and authority, and that help inform individual and group behavior.

“Our goal is to develop a framework that allows us to understand the dynamics that drive the decision-making process of a culture at a deeper level,” says Morstatter in a report released by USC. “And by doing so, we generate more culturally informed forecasts.”

The research also examines how to deploy AI in an ethical and fair way. “Most people, but not all, are interested in making the world a better place,” says Schlesinger. “Now we have to go to the next level—what goals do we want to achieve, and what outcomes would we like to see? How will we measure success, and what will it look like?”

Assuaging ethical concerns

It’s critical to interrogate the assumptions about collected data and AI processes, Schlesinger says. “We talk about achieving fairness through awareness. At every step of the process, you’re making value judgments or assumptions that will weight your outcomes in a particular direction,” he says. “That is the fundamental challenge of building ethical AI, which is to look at all the places where humans are biased.”

Part of that challenge is performing a critical examination of the data sets that inform AI systems. It’s essential to understand the data sources and the composition of the data, and to answer such questions as: How is the data made up? Does it encompass a diverse array of stakeholders? What is the best way to deploy that data into a model to minimize bias and maximize fairness?
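
One way to make those questions operational is a simple composition audit of the training data before it feeds a model. Below is a minimal, hedged sketch using the pandas library; the file name, column name, and reference shares are hypothetical assumptions for illustration:

```python
# Hypothetical audit: compare each demographic group's share of a training
# data set against its share of the population the model is meant to serve.
import pandas as pd

# Assumed file with one row per record and a "race_ethnicity" column.
df = pd.read_csv("training_data.csv")

# Share of each group within the data set.
data_share = df["race_ethnicity"].value_counts(normalize=True)

# Assumed reference shares for the target population (e.g., census figures).
population_share = pd.Series(
    {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.06, "other": 0.02}
)

# Large gaps flag groups that are under- or overrepresented in the data.
gap = data_share.sub(population_share, fill_value=0).sort_values()
print(gap)
```

A gap report like this doesn’t prove a model is biased, but it flags where the data fails to encompass the stakeholders it is supposed to represent.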

As people go back to work, employers may now be using sensing technologies with AI built in, including thermal cameras to detect high temperatures; audio sensors to detect coughs or raised voices, which contribute to the spread of respiratory droplets; and video streams to monitor hand-washing procedures, compliance with physical-distancing rules, and adherence to mask requirements.

Such monitoring and analysis systems not only have technical-accuracy challenges but pose core risks to human rights, privacy, security, and trust. The impetus for increased surveillance has been a troubling side effect of the pandemic. Government agencies have used surveillance-camera footage, smartphone location data, credit card purchase records, and even passive temperature scans in crowded public areas like airports to help trace movements of people who may have contracted or been exposed to covid-19 and establish virus transmission chains.

“The first question that needs to be answered is not just can we do this—but should we?” says Schlesinger. “Scanning individuals for their biometric data without their consent raises ethical concerns, even if it’s positioned as a benefit for the greater good. We should have a robust conversation as a society about whether there is good reason to implement these technologies in the first place.”

What the future looks like

As society returns to something approaching normal, it’s time to fundamentally re-evaluate the relationship with data and establish new norms for collecting data, as well as the appropriate use—and potential misuse—of data. When building and deploying AI, technologists will continue to make those necessary assumptions about data and the processes, but the underpinnings of that data should be questioned. Is the data legitimately sourced? Who assembled it? What assumptions is it based on? Is it accurately presented? How can citizens’ and consumers’ privacy be preserved?

As AI is more widely deployed, it’s essential to consider how to also engender trust. Using AI to augment human decision-making, and not entirely replace human input, is one approach.

“There will be more questions about the role AI should play in society, its relationship with human beings, and what are appropriate tasks for humans and what are appropriate tasks for an AI,” says Schlesinger. “There are certain areas where AI’s capabilities and its ability to augment human capabilities will accelerate our trust and reliance. In places where AI doesn’t replace humans, but augments their efforts, that is the next horizon.”

There will always be situations in which a human needs to be involved in the decision-making. “In regulated industries, for example, like health care, banking, and finance, there needs to be a human in the loop in order to maintain compliance,” says Schlesinger. “You can’t just deploy AI to make care decisions without a clinician’s input. As much as we would love to believe AI is capable of doing that, AI doesn’t have empathy yet, and probably never will.”

It’s critical that data collected and created by AI minimize inequity rather than exacerbate it. There must be a balance between finding ways for AI to help accelerate human and social progress, promoting equitable actions and responses, and simply recognizing that certain problems will require human solutions.

This content was produced by Insights, the custom content arm of MIT Technology Review. It was not written by MIT Technology Review’s editorial staff.

Embracing the rapid pace of AI

In a recent survey, “2021 Thriving in an AI World,” KPMG found that across every industry—manufacturing to technology to retail—the adoption of artificial intelligence (AI) is increasing year over year. Part of the reason is that digital transformation itself is moving faster, which pushes companies to move exponentially faster as well. But, as Cliff Justice, US leader for enterprise innovation at KPMG, posits, “Covid-19 has accelerated the pace of digital in many ways, across many types of technologies.” Justice continues, “This is where we are starting to experience such a rapid pace of exponential change that it’s very difficult for most people to understand the progress.” But understand it they must, because “artificial intelligence is evolving at a very rapid pace.”

Justice challenges us to think about AI in a different way, “more like a relationship with technology, as opposed to a tool that we program,” because he says, “AI is something that evolves and learns and develops the more it gets exposed to humans.” If your business is a laggard in AI adoption, Justice has some cautious encouragement, “[the] AI-centric world is going to accelerate everything digital has to offer.”

Business Lab is hosted by Laurel Ruma, editorial director of Insights, the custom publishing division of MIT Technology Review. The show is a production of MIT Technology Review, with production help from Collective Next.

This podcast episode was produced in association with KPMG.

Show notes and links

“2021 Thriving in an AI World,” KPMG

Full transcript

Laurel Ruma: From MIT Technology Review, I’m Laurel Ruma, and this is Business Lab, the show that helps business leaders make sense of new technologies coming out of the lab and into the marketplace.

Our topic today is the rate of artificial intelligence adoption. It’s increasing, and fast. A new study from KPMG shows that it’s accelerating in specific industries like industrial manufacturing, financial services, and tech. But what happens when you hit the gas pedal but haven’t secured everything else? Are you uneasy about the rate of AI adoption in your enterprise?

Two words for you: covid-19 whiplash.

My guest is Cliff Justice, who is the US leader for enterprise innovation for KPMG. He and his group focus on identifying, developing, and deploying the next generation of technologies, services, and solutions for KPMG and its clients. Cliff is a former entrepreneur and is a recognized authority in global sourcing, emerging technology such as AI, intelligent automation, and enterprise transformation. This episode of Business Lab is produced in association with KPMG. Cliff, thank you for joining me on Business Lab.

Cliff Justice: It’s great to be here. Thanks for having me.

Laurel: So, we’re about to take a look at KPMG’s survey results for its “2021 Thriving in an AI World” report, which looks across seven industries. Why did KPMG repeat that survey for this year? What did you aim to achieve with this research?

Cliff: Well, artificial intelligence is evolving at a very rapid pace. When we first started covering and investing in artificial intelligence probably seven years ago, it was in a very nascent form. There were not very many use cases. Many of the use cases were based on natural language processing. About 10 years ago was when the first public use case of artificial intelligence made the headlines, with IBM Watson winning Jeopardy. Since then, you’ve seen a very, very rapid progression. And this whole field is evolving at an exponential pace. So where we are today is very different than where we were a year or two ago.

Laurel: It does seem like just yesterday that IBM was announcing Watson, and the exponential growth of artificial intelligence is everywhere, in our cars, on our phones. We’re definitely seeing it in more places than just that one research case. One of the headlines from the research is that there’s a perception that AI might be moving too fast for the comfort of some decision-makers in their respective industries. What does too fast look like? Is this due to covid-19 whiplash?

Cliff: It’s not due to covid whiplash necessarily. The covid environment has accelerated the pace of digital in many ways, across many types of technologies. This is where we are starting to experience such a rapid pace of exponential change that it’s very difficult for most people to understand the progress. For any of us, even myself who works in this field, it’s very difficult to understand the progress and the pace of change. And getting an enterprise ready—getting the people, the process, the enterprise systems, the risk, the cyber protections prepared for a world that is powered more and more by artificial intelligence—it’s difficult in normal circumstances. But when you do combine the digital acceleration and adoption that’s taking place as a result of covid, along with the exponential development and evolution of artificial intelligence, it’s hard to understand the opportunities and threats that are posed to an organization.

Even if one could fully wrap their head around the progress of artificial intelligence and the potential of artificial intelligence, changing an organization, and changing the mindset and the culture in a way to adopt and benefit from the opportunities that artificial intelligence poses and also protect against the threats, takes some time. So, it creates a level of anxiety and caution which is, in my view, well justified.

Laurel: So, speaking of that caution or planning needed to deploy AI, in a previous discussion at MIT Technology Review’s EmTech conference in 2019, you said that companies needed to “rethink their ecosystem when deploying AI,” meaning partners, vendors, and the rest of their company, to get everybody to come up to speed. At the time, you mentioned that would be the real challenge. Is that still true? Or do you think now that everything is progressing so quickly, that’s the discomfort that some executives may be feeling?

Cliff: Well, that’s true. It is still true. The ecosystem that got you to a level in more of an analog-centric world is going to be very different in a more AI-centric world. That AI-centric world is going to accelerate everything digital has to offer. What I mean by digital are the new ways of working—the digital business models, the new ways of developing and evolving commerce, the ways we interact and exchange ideas with customers and with colleagues and coworkers. All of these are becoming much more digital-centric, and then artificial intelligence becomes one of the mechanisms that evolves and progresses the way we work and the way we interact. And it becomes a little more like a relationship with technology, as opposed to a tool that we program because AI is something that evolves and learns and develops the more it gets exposed to humans.

Now that we have much more human-like perceptive capabilities, thanks to the evolution of deep learning (by that, today, I mean mostly computer vision), technology is able to take on much more of the world than it could before. So understanding what AI can bring, and how it can enhance and augment human capabilities, is critical. Reestablishing and redeveloping the ecosystem around your business and around your enterprise is important. I think the bigger and more long-term issue, though, is culture, and it’s the culture of the enterprise that you’re responsible for. But it’s also harnessing the external culture: the adoption and the way you work with your customers, your vendors, suppliers, regulators, and external stakeholders. The mindset evolution is not equal in all of those stakeholder groups. And depending on the industry that you’re operating in, it could be very unequal in terms of the level of adoption, the level of understanding, the ability, and the comfort to work with technology. And as that technology becomes more human-like, and we’re seeing that in virtual assistants and with those types of technologies, it’s going to be a bigger chasm to cross.

Laurel: I really like that phrasing of thinking of AI as a relationship with technology versus a tool, because that really does state your intentions when you’re entering this new world, this new relationship, and that you’re accepting that constant change. Speaking of the survey and various industries, some of the industries saw a significant increase in AI deployment, like financial services, retail, and tech. But was it the digital transformation need, or covid, or perhaps other factors that really drove that increase?

Cliff: Well, covid has had an acceleration impact across the board. Things that were in motion—whether adoption of digital technologies, growth, or a change in consumer behavior—covid accelerated all of those trends that were already in place. And that includes business models that were on the decline. We saw the trends that were happening in the malls; that’s just accelerated. We’ve seen the adoption of technology accelerate. There are industries that covid has had less of an effect on—not a zero effect, but less of an effect. Banking and financial services are less affected by covid than retail, hospitality, travel, and logistics. Covid has really accelerated the change that’s occurring in those industries.

AI, separate from covid, has a material impact across all of these. And as our survey said, in industrial manufacturing, the use of robotics, computer vision, and artificial intelligence to speed productivity and improve efficiency has really begun to become mainstream and at scale. Same thing with financial services: consumer interaction has been improved with artificial intelligence in those areas. Technology, not surprisingly, has fully adopted AI, or pretty close to it. And then we’ve seen a dramatic increase in retail as a result of AI. So online shopping and the ability to predict consumer demand have been strong use cases for AI in those industries.

Laurel: So, the laggards though, laggard industries were healthcare and life sciences at only, I say only, a 37% increase in adoption from last year’s survey. That’s still a great number. But do you think that’s because fighting covid was the priority or perhaps because they are regulated industries, or there was another reason?

Cliff: Regulation is a common theme across those laggards. You have government, you have life sciences, healthcare. Financial services, though, is regulated too, and they’re a large adopter, so it can’t be the only thing. I think the hypothesis around covid is probably more plausible because the focus in life sciences has been getting the vaccine out. Even though from our point of view and from what we see, government is a massive adopter. Just in terms of the potential within government, it’s still behind. But the sheer numbers and the sheer amount of activity that’s taking place in government when you compare it to private enterprise is still pretty impressive. It’s just that you’re dealing with such a large-scale change and a lot more red tape and bureaucracy to make that change within a government enterprise.

Laurel: For sure. You mentioned earlier the industrial manufacturing sector, where 72% of business leaders were influenced by the pandemic to speed AI adoption. What does that actually mean for consumers in that industry, as well as for the sector as a whole?

Cliff: When I look at these numbers, there’s not going to be an industry that is not affected by AI. For the industries that are going to adopt it sooner and more rapidly, or that saw an impact as a result of the pandemic, that has almost all been driven by remote work, the inability to get resources to a location, and the impetus to drive automation, with AI being one of the foundational elements of automation. Because if you look at other parts of the survey where we ask, “Where are the biggest benefits?” it’s going to be found in efficiency and productivity. That’s fairly consistent across all industries when you look at where AI is being applied. So automation, productivity, predictive analytics, all of these areas are being driven by these themes around productivity. The use cases are different based on the industry, but the needs are very similar. The overarching themes and the overarching needs are very similar. You just had some industries that were impacted by the pandemic differently.

Laurel: Excitingly, maybe a difference in industrial manufacturing though, as you mentioned, is robotics. So a bit of a hardware play versus always software.

Cliff: Right. Yeah, in industrial manufacturing, you’re seeing a retooling of factories. You’re seeing what some people call the “Tesla effect,” where there is a focus on the transformation and the automation of factories—where building the factory is almost as important as the product itself. There’s a lot of debate and a lot of discussion in that sector around how much to automate, and is there too much automation? I think in some of these public events where you’ve seen a rapid ramp-up in production where automation was used, you’ve seen some backing off of that as well. Too much technology can actually have counterproductive consequences and impact because there has to be human involvement in decision-making and the technology just isn’t there yet. So, a lot of changes happening in that space. We’re seeing a lot of evolution, a lot of new types of technologies. Deep learning is allowing more computer vision, more intelligent automation to take place in the manufacturing process within the factories.

Laurel: Speaking of keeping humans involved in these choices and ideas and technologies, strong cybersecurity is a challenge, really, for everybody, right? But the bad guys are increasingly using AI against companies and enterprises, and your only response and defense is more AI. Do you see cybersecurity specifically being an area that executives across the board accelerate spending for?

Cliff: Well, you’re exactly right, cybersecurity is one of the biggest threats as technology advances, whether it’s AI powered by classical computing or, five or 10 years down the road, quantum computing made available to governments or to corporations. The security risks are going to continue to accelerate. AI is certainly an offense, but it’s a defense as well: predictive analytics using AI to predict threats and to defend against threats that are posed by AI, which is increasing the sophistication of penetration, phishing, and other ways to compromise a system. These technologies are sort of in an arms race between, as you said, the good guys and the bad guys. There’s no end in sight to that as we start to move into an era of real change, which is going to be underpinned by quantum computing in the future. This will only accelerate, because you will need a new type of post-quantum cryptography to defend against the threats that quantum computers could pose to a security organization.

Laurel: It’s absolutely amazing how fast, right? As we were saying, exponential growth, especially with quantum computing perhaps around the corner; five, 10 years, that sounds about right. The research, though, does come back and say that a lot of respondents think their companies should have some kind of AI ethics policy and code of conduct, but not many do. And those that do are often smaller companies. Do you think it’s just a matter of time before everyone does, or that it will even be a board requirement to have these AI ethics policies?

Cliff: Well, we do know that this is being discussed at the regulatory level. There are significant questions around where the government should step in with regulatory measures and where self-policing AI ethics… How does your marketing organization target behavior in its customer base? And how do you leverage AI to use the psychological profiles to enable sales? There are some ethical decisions that would have to be made around that, for example. The use of facial recognition in consumer environments is well debated and discussed. But the use of AI and the ethical use of AI targeting the psychology of consumers, I think that debate has just started largely this summer with some documentaries that came out that showed how social media is using AI to target consumers with marketing products and how that can be misused and misapplied by the bad guys.

So, yeah, this is just the tip of the iceberg. What we’re seeing today is just the initial opening statements when it comes to how far should we go with AI and what are the penalties that are applied to those who go further than we should, and are those penalties regulated by the government? Are they social penalties and just exposure or are these things that we need laws and rules that have some teeth for violating these agreed-upon ethics, whatever they may be?

Laurel: It’s a bit of a push-me, pull-you situation, right? Because the technology is advancing really quickly, but societal norms or regulations may be a bit lagging. And at the same time, companies are not necessarily, maybe in some cases, adopting AI as quickly, or are having problems staffing these AI initiatives. So, how are companies trying to keep up with talent acquisition, and should enterprises start looking, or perhaps have already been looking, at upskilling or training current employees in how to use AI as a new skill?

Cliff: Yeah, these are very hard problems. If you look at the study and dive in, you’ll see the difference between large companies and small companies. I mean, the ability to attract talent that has gone through years and years of training in advanced analytics, computer engineering, deep learning, machine learning, and understanding the complexities and the nuances of training the weights and biases of complex, multilevel, deep learning algorithms—that talent is not easy to come by. It’s very difficult to take a classical computer engineer and retrain them in that type of statistical-based artificial intelligence, where you’re having to really work with training these complex neural networks in order to achieve the goals of the company.

We’re seeing the tech companies offer these services on the cloud, and one way to access artificial intelligence and some of these tools is through subscriptions to APIs, application programming interfaces, and applying those APIs to your platforms and technologies. But to really have a competitive advantage, you need to be able to manipulate, develop, and control the data that goes into training these algorithms. In today’s world, artificial intelligence is very, very data hungry, and it requires massive amounts of data to get accurate and high-quality output. That data accrues to the largest companies, and that’s reflected in their valuation. So, we see who those companies are. A lot of that value is because of the data that they have access to, and the products that they’re able to produce are based on much of that data. Those products many times are powered by artificial intelligence.

Laurel: So back to the survey, one last data point here, 60% of respondents say that AI is at least moderately to fully functional in their organization. Compared to 10 years ago, that does seem like real progress for AI. But not everyone is there yet. What are some steps that enterprises can take to become more fully functional with AI?

Cliff: This is where I go back to what I said last year, which is to re-evaluate your ecosystem. Who are your partners? Who is bringing these capabilities into your business? Understand what your options are relative to the technology providers that are giving you access to AI. Not every company is going to be able to just go hire an AI expert and have AI. These are technologies that, like I said, are difficult to develop and difficult to maintain. They’re evolving at a lightning-fast, exponential pace. So, the conversations that we would have had six months or a year ago would be different now, just because of the pace of change that’s taking place in this environment. The recalcitrance to change is low in AI, and so it’s moving faster than Moore’s Law. It is accelerating as fast as the data allows it. The algorithms themselves have been around for years; it’s the ability to capture and use the data that is driving the AI. So, partnering with these technology companies that have capabilities and access to data that’s relevant to your industry is a critical element to being successful.

Laurel: When you do talk to executives about how to be successful with AI, how do you advise them if they are behind their competitors and peers in deploying AI?

Cliff: Well, we do surveys like this. We do benchmarks. We harness benchmarks that are out there in other areas and other domains. We look at the pace of change and the relative benefit to that specific industry, and even more narrowly than that, to the function or the activity within that industry and that business. AI has not infiltrated every single area yet. It’s on the way to doing that, but in areas like customer service, G&A and the back-office components of an organization, manufacturing, analytics, insights, and forecasting, AI has a strong foothold and is continuing to evolve. But then there are elements of product design, engineering, and other aspects of design that AI is moving into where there’s barely a level playing field right now.

So, it’s uneven. It’s very advanced in some areas, it’s not as advanced in others. I would also say that the perception that will come out in the survey of generalists in these areas may not consider some of the more advanced artificial intelligence capabilities that might be six months, a year, or two years down the road. But those capabilities are evolving very quickly and will be moving into these industries quickly. I would also look at the startup ecosystem as well. The startups are evolving quickly. The technologies that a startup is using and introducing into new industries to disrupt those industries are not necessarily being considered by the more established companies who have existing operating models and existing business models. So, a startup may be using AI and data to totally transform how an industry consumes a product or a service.

Laurel: That’s good advice as always. Cliff, thank you so much for joining us today in what has been a great conversation on the Business Lab.

Cliff: My pleasure. It’s great talking to you.

Laurel: That was Cliff Justice, the US leader for enterprise innovation for KPMG, who I spoke with from Cambridge, Massachusetts, the home of MIT and MIT Technology Review, overlooking the Charles River.

That’s it for this episode of Business Lab. I’m your host, Laurel Ruma. I’m the director of Insights, the custom publishing division of MIT Technology Review. We were founded in 1899 at the Massachusetts Institute of Technology. You can find us in print, on the web, and at events each year around the world. For more information about us and the show, please check out our website at technologyreview.com.

This show is available wherever you get your podcasts.

If you enjoy this episode, we hope you’ll take a moment to rate and review us. Business Lab is a production of MIT Technology Review. This episode was produced by Collective Next. Thanks for listening.

This podcast episode was produced by Insights, the custom content arm of MIT Technology Review. It was not produced by MIT Technology Review’s editorial staff.

The race to understand the exhilarating, dangerous world of language AI

On May 18, Google CEO Sundar Pichai announced an impressive new tool: an AI system called LaMDA that can chat to users about any subject.

To start, Google plans to integrate LaMDA into its main search portal, its voice assistant, and Workspace, its collection of cloud-based work software that includes Gmail, Docs, and Drive. But the eventual goal, said Pichai, is to create a conversational interface that allows people to retrieve any kind of information—text, visual, audio—across all Google’s products just by asking.

LaMDA’s rollout signals yet another way in which language technologies are becoming enmeshed in our day-to-day lives. But Google’s flashy presentation belied the ethical debate that now surrounds such cutting-edge systems. LaMDA is what’s known as a large language model (LLM)—a deep-learning algorithm trained on enormous amounts of text data.

Studies have already shown how racist, sexist, and abusive ideas are embedded in these models. They associate categories like doctors with men and nurses with women; good words with white people and bad ones with Black people. Probe them with the right prompts, and they also begin to encourage things like genocide, self-harm, and child sexual abuse. Because of their size, they have a shockingly high carbon footprint. Because of their fluency, they easily confuse people into thinking a human wrote their outputs, which experts warn could enable the mass production of misinformation.
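
To make that kind of probing concrete, here is a minimal sketch using the open-source Hugging Face transformers library; the model choice and prompts are illustrative assumptions, not the setup of any particular study:

```python
# Hypothetical probe: ask a masked language model to fill in a pronoun for
# two occupations. Skewed predictions hint at gendered associations the
# model absorbed from its training text.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for prompt in [
    "The doctor said [MASK] would be back soon.",
    "The nurse said [MASK] would be back soon.",
]:
    for result in fill_mask(prompt, top_k=3):
        # Each result carries the predicted token and its probability.
        print(prompt, "->", result["token_str"], round(result["score"], 3))
```

Published audits rely on many templates and statistical tests rather than a handful of prompts, but the underlying mechanism is the same.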

In December, Google ousted its ethical AI co-lead Timnit Gebru after she refused to retract a paper that made many of these points. A few months later, after wide-scale denunciation of what an open letter from Google employees called the company’s “unprecedented research censorship,” it fired Gebru’s coauthor and co-lead Margaret Mitchell as well.

It’s not just Google that is deploying this technology. The highest-profile language models so far have been OpenAI’s GPT-2 and GPT-3, which spew remarkably convincing passages of text and can even be repurposed to finish off music compositions and computer code. Microsoft now exclusively licenses GPT-3 to incorporate into yet-unannounced products. Facebook has developed its own LLMs for translation and content moderation. And startups are creating dozens of products and services based on the tech giants’ models. Soon enough, all of our digital interactions—when we email, search, or post on social media—will be filtered through LLMs.

Unfortunately, very little research is being done to understand how the flaws of this technology could affect people in real-world applications, or to figure out how to design better LLMs that mitigate these challenges. As Google underscored in its treatment of Gebru and Mitchell, the few companies rich enough to train and maintain LLMs have a heavy financial interest in declining to examine them carefully. In other words, LLMs are increasingly being integrated into the linguistic infrastructure of the internet atop shaky scientific foundations.

More than 500 researchers around the world are now racing to learn more about the capabilities and limitations of these models. Working together under the BigScience project led by Huggingface, a startup that takes an “open science” approach to understanding natural-language processing (NLP), they seek to build an open-source LLM that will serve as a shared resource for the scientific community. The goal is to generate as much scholarship as possible within a single focused year. Their central question: How and when should LLMs be developed and deployed to reap their benefits without their harmful consequences?

“We can’t really stop this craziness around large language models, where everybody wants to train them,” says Thomas Wolf, the chief science officer at Huggingface, who is co-leading the initiative. “But what we can do is try to nudge this in a direction that is in the end more beneficial.”

Stochastic parrots

In the same month that BigScience kicked off its activities, a startup named Cohere quietly came out of stealth. Started by former Google researchers, it promises to bring LLMs to any business that wants one—with a single line of code. It has developed a technique to train and host its own model with the idle scraps of computational resources in a data center, which holds down the cost of renting the necessary cloud space for upkeep and deployment.

Among its early clients is the startup Ada Support, a platform for building no-code customer support chatbots, which itself has clients like Facebook and Zoom. And Cohere’s investor list includes some of the biggest names in the field: computer vision pioneer Fei-Fei Li, Turing Award winner Geoffrey Hinton, and Apple’s head of AI, Ian Goodfellow.

Cohere is one of several startups and initiatives now seeking to bring LLMs to various industries. There’s also Aleph Alpha, a startup based in Germany that seeks to build a German GPT-3; an unnamed venture started by several former OpenAI researchers; and the open-source initiative Eleuther, which recently launched GPT-Neo, a free (and somewhat less powerful) reproduction of GPT-3.

But it’s the gap between what LLMs are and what they aspire to be that has concerned a growing number of researchers. LLMs are effectively the world’s most powerful autocomplete technologies. By ingesting millions of sentences, paragraphs, and even samples of dialogue, they learn the statistical patterns that govern how each of these elements should be assembled in a sensible order. This means LLMs can enhance certain activities: for example, they are good for creating more interactive and conversationally fluid chatbots that follow a well-established script. But they do not actually understand what they’re reading or saying. Many of the most advanced capabilities of LLMs today are also available only in English.
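
As a hedged illustration of that autocomplete framing, the sketch below samples continuations from GPT-2, a smaller, openly available LLM, through the Hugging Face transformers library; the prompt and settings are arbitrary assumptions:

```python
# GPT-2 continues a prompt by sampling from the statistical patterns it
# learned over web text: fluent continuation, not comprehension.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

completions = generator(
    "The customer asked about a refund, and the support agent replied:",
    max_new_tokens=40,       # length of each continuation
    do_sample=True,          # sample rather than always taking the likeliest token
    num_return_sequences=2,  # draw two different continuations
)
for completion in completions:
    print(completion["generated_text"])
```

Run twice, the script produces different, equally fluent answers: a reminder that the model is assembling plausible text, not consulting a refund policy.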

Among other things, this is what Gebru, Mitchell, and five other scientists warned about in their paper, which calls LLMs “stochastic parrots.” “Language technology can be very, very useful when it is appropriately scoped and situated and framed,” says Emily Bender, a professor of linguistics at the University of Washington and one of the coauthors of the paper. But the general-purpose nature of LLMs—and the persuasiveness of their mimicry—entices companies to use them in areas they aren’t necessarily equipped for.

In a recent keynote at one of the largest AI conferences, Gebru tied this hasty deployment of LLMs to consequences she’d experienced in her own life. Gebru was born and raised in Ethiopia, where an escalating war has ravaged the northernmost Tigray region. Ethiopia is also a country where 86 languages are spoken, nearly all of them unaccounted for in mainstream language technologies.

Despite LLMs having these linguistic deficiencies, Facebook relies heavily on them to automate its content moderation globally. When the war in Tigray first broke out in November, Gebru saw the platform flounder to get a handle on the flurry of misinformation. This is emblematic of a persistent pattern that researchers have observed in content moderation. Communities that speak languages not prioritized by Silicon Valley suffer the most hostile digital environments.

Gebru noted that this isn’t where the harm ends, either. When fake news, hate speech, and even death threats aren’t moderated out, they are then scraped as training data to build the next generation of LLMs. And those models, parroting back what they’re trained on, end up regurgitating these toxic linguistic patterns on the internet.

In many cases, researchers haven’t investigated thoroughly enough to know how this toxicity might manifest in downstream applications. But some scholarship does exist. In her 2018 book Algorithms of Oppression, Safiya Noble, an associate professor of information and African-American studies at the University of California, Los Angeles, documented how biases embedded in Google search perpetuate racism and, in extreme cases, perhaps even motivate racial violence.

“The consequences are pretty severe and significant,” she says. Google isn’t just the primary knowledge portal for average citizens. It also provides the information infrastructure for institutions, universities, and state and federal governments.

Google already uses an LLM to optimize some of its search results. With its latest announcement of LaMDA and a recent proposal it published in a preprint paper, the company has made clear it will only increase its reliance on the technology. Noble worries this could make the problems she uncovered even worse: “The fact that Google’s ethical AI team was fired for raising very important questions about the racist and sexist patterns of discrimination embedded in large language models should have been a wake-up call.”

BigScience

The BigScience project began in direct response to the growing need for scientific scrutiny of LLMs. In observing the technology’s rapid proliferation and Google’s attempted censorship of Gebru and Mitchell, Wolf and several colleagues realized it was time for the research community to take matters into its own hands.

Inspired by open scientific collaborations like CERN in particle physics, they conceived of an idea for an open-source LLM that could be used to conduct critical research independent of any company. In April of this year, the group received a grant to build it using the French government’s supercomputer.

At tech companies, LLMs are often built by only half a dozen people who have primarily technical expertise. BigScience wanted to bring in hundreds of researchers from a broad range of countries and disciplines to participate in a truly collaborative model-construction process. Wolf, who is French, first approached the French NLP community. From there, the initiative snowballed into a global operation encompassing more than 500 people.

The collaborative is now loosely organized into a dozen working groups and counting, each tackling different aspects of model development and investigation. One group will measure the model’s environmental impact, including the carbon footprint of training and running the LLM and factoring in the life-cycle costs of the supercomputer. Another will focus on developing responsible ways of sourcing the training data—seeking alternatives to simply scraping data from the web, such as transcribing historical radio archives or podcasts. The goal here is to avoid toxic language and nonconsensual collection of private information.
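
BigScience’s actual accounting methodology isn’t spelled out here, but as a rough sketch of how training emissions can be instrumented in practice, the open-source codecarbon package can wrap a training run and estimate its energy use and CO2 output (train_model is a hypothetical placeholder):

```python
# Rough sketch: estimate the carbon footprint of a training run with the
# open-source codecarbon package. train_model() stands in for a real loop.
from codecarbon import EmissionsTracker

def train_model():
    ...  # hypothetical training loop

tracker = EmissionsTracker()  # infers local hardware and grid carbon intensity
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent

print(f"Estimated training emissions: {emissions_kg:.3f} kg CO2eq")
```

Life-cycle costs such as manufacturing the supercomputer, which the working group also plans to factor in, fall outside what a runtime tracker like this can see.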

Other working groups are dedicated to developing and evaluating the model’s “multilinguality.” To start, BigScience has selected eight languages or language families, including English, Chinese, Arabic, Indic (including Hindi and Urdu), and Bantu (including Swahili). The plan is to work closely with every language community to map out as many of its regional dialects as possible and ensure that its distinct data privacy norms are respected. “We want people to have a say in how their data is used,” says Yacine Jernite, a Huggingface researcher.

The point is not to build a commercially viable LLM to compete with the likes of GPT-3 or LaMDA. The model will be too big and too slow to be useful to companies, says Karën Fort, an associate professor at the Sorbonne. Instead, the resource is being designed purely for research. Every data point and every modeling decision is being carefully and publicly documented, so it’s easier to analyze how all the pieces affect the model’s outcomes. “It’s not just about delivering the final product,” says Angela Fan, a Facebook researcher. “We envision every single piece of it as a delivery point, as an artifact.”

The project is undoubtedly ambitious—more globally expansive and collaborative than any the AI community has seen before. The logistics of coordinating so many researchers is itself a challenge. (In fact, there’s a working group for that, too.) What’s more, every single researcher is contributing on a volunteer basis. The grant from the French government covers only computational, not human, resources.

But researchers say the shared need that brought the community together has galvanized an impressive level of energy and momentum. Many are optimistic that by the end of the project, which will run until May of next year, they will have produced not only deeper scholarship on the limitations of LLMs but also better tools and practices for building and deploying them responsibly.

The organizers hope this will inspire more people within industry to incorporate those practices into their own LLM strategy, though they are the first to admit they are being idealistic. If anything, the sheer number of researchers involved, including many from tech giants, will help establish new norms within the NLP community.

In some ways the norms have already shifted. In response to conversations around the firing of Gebru and Mitchell, Cohere heard from several of its clients that they were worried about the technology’s safety. Its website now includes a page featuring a pledge to continuously invest in technical and non-technical research to mitigate the possible harms of its model. It says it will also assemble an advisory council made up of external experts to help it create policies on the permissible use of its technologies.

“NLP is at a very important turning point,” says Fort. That’s why BigScience is exciting. It allows the community to push the research forward and provide a hopeful alternative to the status quo within industry: “It says, ‘Let’s take another pass. Let’s take it together—to figure out all the ways and all the things we can do to help society.’”

“I want NLP to help people,” she says, “not to put them down.”

Update: Cohere’s responsibility initiatives have been clarified.

Could the ransomware crisis force action against Russia?

What touches the American psyche more deeply than a gas shortage? If the Colonial Pipeline attack is any measure, nothing. Ransomware has been a growing problem for years, with hundreds of brazen criminal hacks against schools, hospitals, and city governments—but it took an attack that affected people’s cars for the US to really take notice.

The strike on the Colonial Pipeline may have only led to panic buying rather than genuine gas scarcity, but it pushed the country hard enough to demand a response from the president of the United States.

On May 10, after the company had paid $4.4 million to the hackers responsible, President Biden made his argument. While there was no evidence of direct Russian government involvement in the Colonial Pipeline attack, he said, Moscow has a responsibility to deal with criminals residing within its own borders.

His statement is based on what experts have long known: that Russia is a cybercrime superpower in large part because the line between government and organized crime is deliberately hazy.

“We have a 20-year history of Russia harboring cybercriminals,” says Dmitri Alperovitch, the former CTO of the cloud security company CrowdStrike and chairman of the Silverado Policy Accelerator, a technology-focused think tank in Washington, DC. “At a minimum they turn a blind eye toward cybercriminals; at a maximum they are supported, encouraged, facilitated.”

Knowing what is happening is one thing, however. What’s more difficult is working out how to change it.

Imposing consequences

Under international law, states have a responsibility not to knowingly allow their territory to be used for international crime. The principle is most often invoked against piracy, but it also applies to terrorism and organized crime. Global agreements mean that governments are obligated to shut down such criminal activity or, if they lack the capability, to get assistance to do so.

Russia, however, has been known to protect criminal hackers and even co-opt them to undertake attacks on its behalf. More often, it simply tolerates and ignores the crooks as long as the country itself is not affected. That means hackers will routinely skip any computer using the Russian language, for instance, in an implicit admission of how the game is played.

Meanwhile, the Kremlin routinely and strongly resists international efforts to bring the hackers to heel, simply throwing accusations back at the rest of the world—refusing to acknowledge that a problem exists, and declining to help.

On May 11, for example, shortly after Biden’s statement, Kremlin spokesman Dmitry Peskov publicly denied Russian involvement. Instead, he criticized the United States for “refusing to cooperate with us in any way to counter cyber-threats.”

The calculus for Russia is difficult to gauge, but a few variables are striking: ransomware attacks destabilize Moscow’s adversaries and transfer wealth to Moscow’s friends—all without much in the way of negative consequences.

Now observers are wondering if high-profile incidents like the pipeline shutdown will change the math.

“The question for the US and the West is, ‘How much are you willing to do to the Russians if they’re going to be uncooperative?’” says James Lewis, a cybersecurity expert at the Center for Strategic and International Studies. “What the West has been unwilling to do is take forceful action against Russia. How do you impose consequences when people ignore agreed-upon international norms?”

“I do think that we need to put pressure on Russia to start dealing with the cybercriminals,” Alperovitch argues. “Not just the ones directly responsible for Colonial, but the whole slew of groups that have been conducting ransomware attacks, financial fraud, and the like for two decades. Not only has Russia not done that: they’ve strenuously objected when we demand arrests of individuals and provide full evidence to Russian law enforcement. They’ve done nothing. They’ve been completely obstructionist at the least, not helping in investigations, not conducting arrests, not holding people accountable. At a minimum, we need to demand that they take action.”

There are numerous examples of cybercriminals being deeply entangled with Russian intelligence. The enormous 2014 hack against Yahoo resulted in charges against Russian intelligence officers and cybercriminal conspirators. The hacker Evgeniy Bogachev, once the world’s most prolific bank hacker, has been linked to Russian espionage. And on the rare occasions when hackers are arrested and extradited, Russia accuses the US of “kidnapping” its citizens. The Americans counter that the Kremlin is protecting its own criminals by preventing investigation and arrest.

Bogachev, for example, has been charged by the US for creating a criminal hacking network responsible for stealing hundreds of millions of dollars through bank hacks. His current location in a resort town in southern Russia is no secret, least of all to the Russian authorities who at first cooperated with the American-led investigation against him but ultimately reneged on the deal. Like many of his contemporaries, he’s out of reach because of Moscow’s protection.

To be clear: there is no evidence that Moscow directed the Colonial Pipeline hack. What security and intelligence experts argue is that the Russian government’s long-standing tolerance of—and occasional direct relationship with—cybercriminals is at the heart of the ransomware crisis. Allowing a criminal economy to grow unchecked makes it virtually inevitable that critical infrastructure targets like hospitals and pipelines will be hit. But the reward is high and the risk so far is low, so the problem grows.

What are the options?

Just days before the pipeline was hacked, a landmark report, “Combating Ransomware,” was published by the Institute for Security and Technology. Assembled by a special task force comprising government, academia, and representatives of the American technology industry’s biggest companies, it was one of the most comprehensive works ever produced about the problem. Its chief recommendation was to build a coordinated process to prioritize ransomware defense across the whole US government; the next stage, it argued, would require a truly international effort to fight the multibillion-dollar ransomware problem.

“The previous administration didn’t think this problem was a priority,” says Phil Reiner, who led the report. “They didn’t take coordinated action. In fact, that previous administration was completely uncoordinated on cybersecurity. It’s not surprising they didn’t put together an interagency process to address this; they didn’t do that for anything.”

Today, America’s standard menu of options for responding to hacking incidents ranges from sending a nasty note or making individual indictments to state-level sanctions and offensive cyber-actions against ransomware groups.

Experts say it is important to get allies to publicly acknowledge the problems and endorse the consequences—and to be less hesitant. Biden’s public assertion that the Kremlin bears responsibility for cybercrime carried out from Russian soil could be a signal to Moscow of potential consequences if action isn’t taken, although he didn’t say what those consequences could be. The fact that the United Kingdom’s foreign minister, Dominic Raab, soon echoed the sentiment is a sign of growing international consensus.

“The preponderance of opinion is for caution, which of course the Russians know and exploit,” Lewis says. “Colonial hasn’t fully changed that, but I think we’re moving away from a timid response. We’re not changing anything, and things are getting worse.”

Action can be stymied for fear of escalation, or because cyber can take a back seat to other issues important to the Russia-US relationship, like arms control or Iran. But there are efforts under way to expand the options for action now that senior leaders on both sides of the Atlantic clearly see ransomware as a national security threat.

This is a fundamental shift that could drive change—in theory.

“I wonder about the argument against action, the idea that it risks making the Russians mad so they’ll do something back to us,” says Lewis. “What exactly have they not done?”

Today, the White House is actively working with international partners, the Justice Department is standing up a new ransomware task force, and the Department of Homeland Security is ramping up efforts to deal with the problem.

“This is a solvable problem,” says Reiner, who was a senior National Security Council official under Obama. “But if action isn’t taken, it’s going to get worse. You thought gas lines for a day or two were bad, but get used to it. They’re going to continue to ramp up against schools, hospitals, businesses, you name it. The ransomware actors won’t care until they face consequences.”

We could see federal regulation on face recognition as early as next week

On May 10, 40 advocacy groups sent an open letter demanding a permanent ban on the use of Amazon’s facial recognition software, Rekognition, by US police. The letter was addressed to Jeff Bezos and Andy Jassy, the company’s current and incoming CEOs, and came just weeks before Amazon’s year-long moratorium on sales to law enforcement was set to expire.

The letter contrasted Bezos’s and Jassy’s vocal support of Black Lives Matter campaigners during last summer’s racial justice protests after the murder of George Floyd with reporting that other Amazon products have been used by law enforcement to identify protesters.

On May 17, Amazon announced it would extend its moratorium indefinitely, joining competitors IBM and Microsoft in self-regulated purgatory. The move is a nod to the political power of the groups fighting to curb the technology, and a recognition that new legislative battlegrounds are starting to emerge. Many believe that substantial federal legislation is likely to come soon.

“People are exhausted”

The past year has been pivotal for face recognition, with revelations of the technology’s role in false arrests, and bans on it put in place by almost two dozen cities and seven states across the US. But the momentum has been shifting for some time.

In 2018, AI researchers published a study comparing the accuracy of commercial facial recognition software from IBM, Microsoft, and Face++. Their work found that the technology identified lighter-skinned men much more accurately than darker-skinned women; IBM’s system scored the worst, with a 34.4% difference in error rate between the two groups.

Also in 2018, the ACLU tested Amazon’s Rekognition and found that it misidentified 28 members of Congress as criminals—an error disproportionately affecting people of color. The organization wrote its own open letter to Amazon, demanding that the company ban government use of the technology, as did the Congressional Black Caucus—but Amazon made no changes.

“If we’re going to commit to racial equity in the criminal justice system … one of the simplest and clearest things you can do is end the use of facial recognition technology.”

Kate Ruane, ACLU

During the racial justice movements against police brutality last summer, however, Amazon surprised many by announcing that it was halting police use of Rekognition, with exceptions for federal law enforcement officers such as ICE. The company’s announcement said it hoped the pause “might give Congress enough time to put in place appropriate rules.”

Evan Greer is the director of Fight for the Future, a technology advocacy group that supports abolishing face recognition technology, and she says there is growing public support for regulating it. This week’s extension of the moratorium, she says, shows that “Amazon is responding to this enormous pressure that they’re receiving, not just around facial recognition,” adding, “I really give tremendous credit to the nationwide uprisings for racial justice that have happened over the last year and a half.”

“A political reality”

Although there is pressure building on large technology providers, the reality is that most law enforcement and government users don’t buy facial recognition software from companies like Amazon. So though the moratoriums and bans are welcome to advocacy groups, they don’t necessarily prevent the technologies from being used. Congress, meanwhile, has yet to pass any federal legislation on facial recognition in law enforcement, government, or commercial settings that would regulate smaller providers.

Some hope that federal legislation is soon to come, however, either through direct congressional action, a presidential executive order, or upcoming appropriation and police reform bills.

“I think best-case scenario is that Congress passes a moratorium on the use of it,” says Kate Ruane, senior legislative counsel at the ACLU. She thinks that new uses should only be permitted after more legislative work.

Several federal bills have already been proposed that would rein in access to facial recognition.

  • The Facial Recognition and Biometric Technology Moratorium Act calls for banning use of the software by any federal entities and withholding federal grant money from any state and local authorities that do not enact their own moratorium. It was proposed by four Democratic members of Congress and introduced to the Senate last year.
  • The George Floyd Justice in Policing Act would prevent the use of facial recognition in body cameras. The bill has already passed in the House and is expected to reach the Senate this coming week. President Biden has asked that the bill be passed ahead of the anniversary of George Floyd’s death on May 25.
  • The Fourth Amendment Is Not For Sale Act, a bipartisan bill introduced by 18 senators, would bar the government from working with technology providers that break their terms of service. In practice, it would largely prevent government access to systems that engage in web scraping, such as Clearview AI.

Mutale Nkonde, the founding CEO of AI for the People, a nonprofit that advocates for racial justice in technology, believes we are likely to see additional federal legislation by the midterm elections next year.

“I do think there is going to be federal legislation introduced that is going to govern all algorithmic systems, including facial recognition,” Nkonde says. “I think that that’s a political reality.”

Nkonde says the concept of impact assessments that evaluate technological systems on the basis of civil rights is gaining traction in policy circles on both sides of the aisle.

The ACLU is lobbying the Biden administration for an executive order, and it recently published a letter with 40 other groups asking for an immediate ban on government use of the technology.

“If we are going to commit to racial justice, if we’re going to commit to racial equity in the criminal justice system, if we’re going to commit to those sorts of reforms, one of the simplest and clearest things you can do is end the use of facial recognition technology,” says Ruane.

“People are just more radical”

In the meantime, Ruane expects self-regulation to remain one of the most effective methods of preventing expanded use of facial recognition. It’s plausible that federal agencies like the Departments of Housing, Homeland Security, and Education will consider imposing rules banning the use of the technology.

Nkonde is optimistic that the moratoriums will expand into bans and more permanent legislation: “I think moratoriums seemed to be what was possible prior to George Floyd being killed. After that, people are just more radical.”

But Greer cautions that for all the momentum against face recognition, legislation that focuses heavily on the racial accuracy of the systems might not solve deeper problems. “I think it would be a mistake if policymakers see accuracy as the only problem with facial recognition that needs to be addressed,” she says. “Industry would actually be very happy with a bill that, for example, says something like ‘If you’re going to sell a facial recognition system, it has to be 99% accurate on people of all different races and skin tones.’”

“Even if the bias isn’t baked into the system, you still have a biased policing system that’s now being accelerated and sort of supercharged with this technology,” she adds.

Why WordPress is the best CMS of 2021

Did you know that WordPress powers more than 40% of all websites? That’s a pretty big percentage, and if you’re not on WordPress yet, you’re probably wondering, “Why?”

WordPress used to be a blogging platform, but it has adapted well over the years and become extremely versatile, allowing users to create fully functional sites of any category. It’s also open-source software, meaning it’s completely free to use and redistribute, with no license restrictions or expiry.

Beyond the basics of what WordPress is, however, there are several features that make it great to build your website on (and totally explain why more than 40% of the internet is using it!).

Built-in SEO boosts

A lot of website traffic comes from people using search engines such as Google, Yahoo, and Bing. Rankings on search engine results pages can bring thousands of users to a website daily, which is why site owners play tug-of-war with each other over traffic and care so much about SEO (search engine optimization). Google and other search engines rank sites against a set of predefined parameters, and WordPress handles those parameters effectively.

WordPress, from the get-go, gives you an advantage with SEO, especially in regard to on-page optimization. It takes care of many cornerstone elements of a website, such as:

  • Precise HTML markup. HTML markup helps search engines understand the website’s layout and content formats more easily. And some of the latest HTML5 WordPress themes make it even more compelling for users as well as for search engine crawlers.
  • Content creation efficiency. Content is how you present your website to search engines, and WordPress’s blogging roots have made the platform especially friendly for content creators of all kinds: bloggers, media outlets, news sites, and loads more.
  • SEO-beneficial permalinks. Permalinks (permanent links) work best when they contain keywords related to your content, and WordPress makes it easy to customize them (see the sketch after this list). For example, “https://getflywheel.com/layout/best-cms-wordpress-2021” is a lot more beneficial to users and search engines than “https://getflywheel.com/layout/?p3-1”.
  • Image optimization. Images are crucial for your website, and WordPress handles them well too. The built-in editor allows you to optimize images with alt tags, descriptions, captions, and cropping.
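
To make the permalink point concrete, here is a minimal Python sketch of the kind of slug generation that turns a post title into a keyword-rich URL segment. (WordPress itself does this in PHP; this sketch only illustrates the idea, not the actual implementation.)

```python
import re
import unicodedata

def slugify(title: str) -> str:
    """Turn a post title into an SEO-friendly permalink slug."""
    # Fold accented characters down to plain ASCII.
    ascii_title = (
        unicodedata.normalize("NFKD", title)
        .encode("ascii", "ignore")
        .decode("ascii")
        .lower()
    )
    # Drop punctuation, then collapse runs of spaces/hyphens into one hyphen.
    ascii_title = re.sub(r"[^a-z0-9\s-]", "", ascii_title)
    return re.sub(r"[\s-]+", "-", ascii_title).strip("-")

print(slugify("Why WordPress Is the Best CMS of 2021!"))
# -> why-wordpress-is-the-best-cms-of-2021
```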

These are just a few of the features that make WordPress special from its core. But on top of all this, there are even more you can add to it and revamp the whole CMS for your needs.


It’s easy to customize (even if you’re not a developer)

WordPress powers millions of sites around the world. But does that mean everyone building them is a trained developer? Not really. That’s due to the platform’s simple user interface, easy-to-understand options, and the functionality of its dashboard. Anyone with zero coding knowledge can use it effectively, relying on its huge range of themes, plugins, and tools to create beautiful, dynamic websites.

Managing the site is even easier once it’s done and live. WordPress constantly rolls out updates for themes, better tools, and new plugins that you can install with a single click of a mouse.

Competitors like Joomla and Drupal lag far behind WordPress, at 5.9% and 3.9% respectively, which shows just how dominant WordPress is.

There are plenty of WordPress themes and plugins

WordPress comes packed with pre-built themes, and designers and developers come up with new ones every day. The increased functionality and SEO optimization make each site dynamic and user-friendly across desktops, smartphones, and other devices. With all these choices, you can make your site look however you want. You can even find themes based on your needs or categories, like eCommerce, portfolio, blog, business, you name it.

Just like the heaps of themes WordPress offers, the platform also provides plugins for all your needs, which is another reason so many people find WordPress appealing.

Plugins are an easy way to add additional functionality to a WordPress site. Some of the most popular ones include Yoast SEO, Jetpack, W3 Total Cache, WooCommerce, Google XML Sitemaps, Google Analytics, and more.

The developers behind these plugins are usually pretty active, as well, and release updates on a regular basis. These updates can give your site even more functionality, keep it up-to-date, and increase performance.

It’s true that most plugins come in handy and increase the functionality of a website. But install too many and a site can become bloated and slow. To make sure that doesn’t happen, WordPress lets you disable or deactivate any plugin you’ve installed whenever you need to, helping you keep your site optimal at all times.
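
As a rough illustration of how that works in practice, the sketch below deactivates a plugin remotely through the WordPress core REST API’s plugins endpoint (available since WordPress 5.5). The site URL, credentials, and plugin name are placeholders, and you should verify the endpoint against your own site before relying on it:

```python
import requests

# Hypothetical values: substitute your own site, user, and application
# password (WordPress 5.5+ supports application passwords for API auth).
SITE = "https://example.com"
AUTH = ("admin", "abcd efgh ijkl mnop qrst uvwx")

# Plugins are addressed as "<directory>/<main-file>" without the ".php"
# suffix, e.g. the Akismet plugin below.
resp = requests.post(
    f"{SITE}/wp-json/wp/v2/plugins/akismet/akismet",
    auth=AUTH,
    json={"status": "inactive"},  # use "active" to switch it back on
)
resp.raise_for_status()
print(resp.json().get("status"))  # expect "inactive" on success
```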

Free vs. premium WordPress themes & plugins

The number of themes and plugins available for WordPress is immense. There are literally thousands of options to use on your site. For plugins, you can download some from the WordPress.org Plugin Directory. But here’s the catch: there are a fair number of premium plugins for you to use as well, and these aren’t found in that directory.

Mobile optimization

With every passing year, more and more users actively visit websites from their hand-held devices, but many sites still aren’t optimized for different screen ratios and fast loading. Luckily, between some built-in WordPress functionality and most WordPress themes, your site will be packed with mobile features right off the bat.

From site design to image scaling, as long as you have a responsive WordPress theme, most of these important features will just naturally happen – no extra coding required. This will help both users and search engines access your site on all devices, leading to a great user experience no matter the screen size.

Google also prioritizes mobile experience in its ranking criteria, so if you want your site to show up in mobile search results, this is incredibly important to think about.

Between WordPress themes and plugins, it’s outrageously easy to create a mobile-friendly website. It’s no wonder more than 40% of websites use it!

WordPress security

Despite powering such a large share of the world’s websites, WordPress maintains a high level of security. Of course, there are always loopholes hackers could exploit (this is true of any CMS), but WordPress regularly releases security updates to protect your site from vulnerabilities. And as a website owner, getting hacked is about the last thing you ever want to worry about.

By regularly updating your site to the latest WordPress version and using themes and plugins that you trust, you can greatly minimize security risks. Add a managed WordPress host like Flywheel on top of that, and you’re looking at a super secure website.

Integrate with other software tools

If you’re building a website, you probably have a goal – maybe it’s simply to showcase your work online, generate leads for your business, or streamline information for your users. No matter the goal, you’re probably using a few other apps to achieve it, like social media platforms, email marketing tools, or analytics applications.

WordPress is such a popular platform that many of the other online tools you use will have an integration for it. Whether it connects via an API or a plugin, the process to connect these tools in your workflow is super simple.


The WordPress community keeps growing

As the numbers suggested earlier, of sites with a known CMS, nearly 60% use WordPress. The WordPress community won’t be rivaled anytime soon, and it’s only continuing to grow. With new SEO features, improved dashboard functionality, and UI updates coming in the future, the platform is only getting better and better.

“It’s a no-brainer that you should use WordPress as your CMS in 2021 (and beyond!).”

What do you say, are you convinced? If you’re still skeptical of WordPress, why? Or if you’re already using it, what do you love about it? Let me know in the comments below; let’s build a better community.



Squarespace IPO
Squarespace IPO: Everything you need to know about Squarespace

Squarespace is set to go public by a direct listing that could cause a volatile start to life as a publicly-traded company. We explain everything you need to know about the listing and the business.

When is the Squarespace IPO?

Squarespace is going public via a direct listing on the New York Stock Exchange on or around Wednesday May 19. It will list under the ticker ‘SQSP’.

A direct listing means only existing shareholders will sell shares and that Squarespace won’t issue any new shares or raise any cash for the business.

Squarespace share price: How much is Squarespace worth?

Squarespace announced it had raised $300 million in March 2021 in a funding round that valued the business at $10 billion, which is likely to be used as a rough benchmark ahead of the listing.

However, the initial valuation could be wildly different once shares start trading. There is no bookbuild process like there would be under a traditional IPO, which usually discovers what kind of price the market is willing to pay. There will be no targeted price range and there are no underwriters guaranteeing to buy any shares not taken up by the public.

This, Squarespace admits, could mean its shares ‘may be volatile, and could, upon listing on the NYSE, decline significantly and rapidly’.

Ultimately, it will come down to supply and demand: supply will depend on how many shares existing investors are willing to part with, and demand on the market’s appetite for the latest tech stock to go public.

It is important to note that Squarespace will have a multi-class share structure that ultimately means its founder Anthony Casalena will retain control over key decisions for the business. The Class A shares being floated carry one vote per share, but the Class B shares in the hands of Casalena carry 10 votes each. This means investors will have minimal say on key matters like the company’s strategy or board appointments. There are also Class C shares in issue that don’t carry any voting rights.

What is Squarespace?

Squarespace was founded in 2003 to ‘enable anyone to easily publish to the web and enable anyone to access the power of great design’. It was originally a blogging service but has expanded over the last two decades into an all-in-one integrated platform that allows individuals or businesses to launch a holistic online presence and manage their digital brand and operations.

In a nutshell, it allows users to build a mobile- and web-friendly website that facilitates online commerce, from taking bookings and payments to selling subscription content or physical goods. It also integrates social media to ensure the online brand is maintained across the web, and provides marketing support in the form of email campaigns, customer data analysis, and tools to help manage customer relationships.

This is broken down into Squarespace’s three divisions. The first is Presence, handling the website and social media activity. The second is Commerce, handling all the transaction activity with payments largely handled by partner Stripe. And the third is Marketing.

Squarespace puts design at the centre of its product and aims to be simple to use but powerful, making it suitable for everyone from one-man bands, musicians, artists and other creatives to businesses spanning small independents to large iconic brands.

Notably, Squarespace does not act as an intermediary between its users and their customers; it encourages users to interact with their customers directly by providing ‘the piece of the web they own on their own terms’.

How does Squarespace make money?

Squarespace generates virtually all its revenue from customers paying a subscription in order to maintain their online presence, providing transparent and recurring income for the business. Most of its customers, around 70%, pay an annual subscription with the rest paying on a monthly basis.

This means growth relies on Squarespace securing new subscribers. It has delivered 20 consecutive quarters of subscriber growth and had over 3.6 million subscribers at the end of 2020, over 22% more than a year earlier. It will also have to retain existing subscribers, which will largely depend on the success of their businesses. Users should be sticky so long as their businesses perform well, and this is supported by the high rate of annual rather than monthly subscriptions.
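
A quick back-of-the-envelope check, using only the figures quoted above, shows what that growth rate implies about the size of the subscriber base a year earlier:

```python
# Back-of-the-envelope check using the figures quoted above.
subscribers_2020 = 3_600_000  # "over 3.6 million" at the end of 2020
growth = 0.22                 # "over 22% more than a year earlier"

implied_2019 = subscribers_2020 / (1 + growth)
print(f"Implied end-of-2019 subscriber base: ~{implied_2019:,.0f}")
# -> roughly 2.95 million
```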

Squarespace does make a small amount of revenue from non-subscription services, such as revenue-sharing fees with partners and fixed transaction fees its customers have to pay for doing business on the platform.

Is Squarespace profitable?

Squarespace is fast-growing, with revenue up 28% in 2020, and has turned a profit every year since 2016. However, net income almost halved last year to $30.6 million as the company stepped up investment. For example, it raised its marketing budget by 40% during the pandemic to capitalise on the growing demand for digital solutions as business shifted online.

Squarespace has said expenditure and investment will continue to rise alongside revenue, which means the company is not guaranteed to translate top-line growth into bottom-line profit over the coming years. Plus, don’t expect dividends anytime soon: it plans to reinvest any profits back into the business.

Squarespace ($, thousands)   2018      2019      2020
Revenue                      389,863   484,751   621,149
Gross Profit                 319,687   402,841   522,812
Operating Profit             54,756    61,340    40,220
Net Income                   43,123    58,152    30,588
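
To tie the table back to the figures discussed above, here is a short Python check of the headline numbers; every input is taken straight from the table:

```python
# Inputs taken straight from the table above ($, thousands).
revenue      = {2018: 389_863, 2019: 484_751, 2020: 621_149}
gross_profit = {2018: 319_687, 2019: 402_841, 2020: 522_812}
net_income   = {2018: 43_123, 2019: 58_152, 2020: 30_588}

# Revenue growth in 2020 -- matches the ~28% figure cited above.
print(f"2020 revenue growth: {revenue[2020] / revenue[2019] - 1:.1%}")  # 28.1%

# Gross margin stayed high even as net income "almost halved".
print(f"2020 gross margin: {gross_profit[2020] / revenue[2020]:.1%}")   # 84.2%
print(f"2020 net income change: {net_income[2020] / net_income[2019] - 1:.1%}")  # -47.4%
```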

What is Squarespace’s strategy?

Squarespace has grown with the internet over the past two decades, and the company believes it has only just started to scratch the surface. It has over 3.6 million subscribers but estimates there are around 800 million small businesses and self-employed people globally. Over 540,000 new ventures are launched in the US alone every month, and just under half of the country’s SMEs still don’t have an online presence, so there is plenty of new business to win.

In short, Squarespace’s market has huge potential, and that is only set to grow as commerce moves further online, a shift accelerated by the global pandemic. It also feeds into the growing movement by brands to cut out the middleman and set up their own direct-to-consumer model.

Squarespace intends to continue investing in its product by adding new and improved design, commerce and marketing tools. Commerce tools will be particularly important for future growth as it looks to become a core platform for transactions worldwide. Gross Merchandise Value, which measures the gross value of the goods and services bought over its platform, will be an important metric to watch.

It will achieve this partly through acquisitions. For example, in March it bought a company named Tock that allows restaurants to handle reservations, table management, takeout orders and other key digital services in a $415 million cash-and-share deal.

Key to its offering is ensuring the platform can scale as its users’ businesses grow. However, it is also increasing its focus on larger businesses through its Enterprise offering, which should deliver larger income and higher margins.

Squarespace board of directors

Squarespace is led by its founder, Anthony Casalena, who retains control over the company’s key decisions. The business admitted in its prospectus that its future performance ‘depends on the continued services and contributions’ of Casalena, ‘who is critical to the development of our business and growth strategy’.

The full board of directors comprises:

  • Founder, CEO and Chair – Anthony Casalena
  • Chief Product Officer – Paul Gubbay
  • Chief Financial Officer – Marcela Martin
  • General Counsel & Secretary – Courtenay O’Connor
  • Directors – Andrew Braccia, Michael Fleisher, Jonathan Klein, Liza Landsman, Anton Levy
Shopify
Shopify stock pops after Google announces online shopping expansion

Google is deepening its partnership with Shopify by making it easier for the company’s 1.7 million merchants to reach shoppers in Google Search and across some of its other properties.

The move comes as Google and Shopify ramp up their efforts to compete against Amazon in e-commerce. Amazon is also increasingly competing with Google on search ads for commercial queries, which typically signal that a consumer is actively considering a purchase. Amazon is expected to earn 19% of all search ad revenue this year, compared with about 57% for Google, according to eMarketer.

Shares of Shopify popped as much as 4% on the news, closing up more than 3% on the day.

Google made the announcement during its conference for software developers, Google I/O, which kicked off on Tuesday. The company didn’t offer many details about the integration, but it said it will allow Shopify businesses to appear across Google Search, Maps, Lens, Images and YouTube “with just a few clicks.”

In a blog post, Google said this will make Shopify merchants’ products more discoverable across its various properties.

“We believe you deserve the most choice available and we’ll continue to innovate on shopping every step of the way,” said Bill Ready, president of commerce and payments at Google, during a presentation at I/O.

Separately, the company announced other enhancements to its e-commerce functionality: For instance, Google’s Chrome browser will persistently display shopping carts when people open new tabs, so they can return to shopping after doing other tasks.

At the start of the pandemic, Google said it was waiving commission fees for merchants that participate in its “Buy” program, which allows consumers to search for and check out retailers’ products directly on its platform without being directed to retailers’ sites. The company also said it would be opening its platform to third-party providers, including PayPal and Shopify, to allow retailers more buying options outside of its own platform.

Google is trying again to ramp up its e-commerce efforts, as the pandemic has created long-lasting demand for online buying, which Google’s competitors have cashed in on.

In a blog post, the company said, “As we eliminate barriers like fees and improve our technology, we’ve seen a 70% increase in the size of our product catalog and an 80% increase in merchants on our platform.”

Amazon shutting down
Amazon is shutting down its Prime Now fast delivery app

Amazon announced Friday it is shutting down its stand-alone Prime Now platforms and is directing users who want fast delivery on groceries and other goods to order them through the Amazon app or website.

The Prime Now app and website will be retired worldwide by the end of this year, Amazon said.

“To make this experience even more seamless for customers, we are moving the experience from a separate Prime Now app onto the Amazon app and website so customers can shop all Amazon has to offer from one convenient location,” said Stephenie Landry, vice president of grocery at Amazon.

Consumers will be able to choose two-hour delivery on essentials and other goods via the Amazon app or website. Two-hour grocery delivery will be available via Amazon Fresh or Whole Foods, both of which are located in Amazon’s app and website.

Additionally, any third-party retailers or local stores that were offered on the Prime Now app will be moved over to Amazon, including Bartell Drugs, a pharmacy chain in Seattle, and Union Square Wines & Spirits in New York City.

Amazon debuted Prime Now in 2014 as a way for members of its Prime subscription service to get books, toys, household essentials and other goods delivered to their doorstep in one or two hours for a small fee. Prime Now initially launched in a handful of cities, but it’s now available in more than 5,000 cities and towns and two-hour delivery is free. In a testament to how much the service has grown, Amazon operates dedicated Prime Now warehouses to fulfill orders.

“In 2014, I wrote a six-page document outlining a service that would allow customers to get last-minute items in about an hour,” Landry wrote in the blog post. “We even gave the project the internal code name ‘Houdini.’ In just 111 days, our team took the concept outlined in that six-page document and turned it into Prime Now, which became the foundation for Amazon’s ultrafast grocery and same-day delivery businesses.”

Amazon’s ambitions in grocery have deepened over the years. It has rolled out multiple services, acquired upscale supermarket chain Whole Foods for $13.4 billion in 2017 and last year launched its own chain of Fresh grocery stores, which has resulted in a somewhat disjointed grocery strategy.

The company has recently taken steps to streamline its grocery offerings. In January, Amazon shuttered its Prime Pantry service, which delivered non-perishable groceries. The company is also rebranding its Go Grocery brand to Amazon Fresh and closing down one of two Go Grocery locations, GeekWire reported this week.

The move to shut down Prime Now’s app and website had already been underway for some time. The company recently began directing users to the Amazon app and website via a pop-up in the Prime Now app.

Additionally, Amazon said it has already discontinued Prime Now’s app and website in India, Japan and Singapore. It also began offering two-hour delivery from Amazon Fresh and Whole Foods on Amazon in 2019.