Lyft donates $1M to the ACLU, condemns Trump’s immigration actions

Ride hailing provider Lyft has taken a strong stance against Trump’s new immigration actions and ban on Muslim refugees (which Rudy Giuliani admitted on Fox News on Sunday morning was exactly what the order was intended to be). In an email sent to users, Lyft noted that it is “firmly against these actions, and will not be silent on issues that threaten the values of our community.”

This is one of the strongest statements against Trump’s unconstitutional executive orders from a tech company to date, and Lyft is also putting action behind its words: The ride hailing company also announced it will be donating $1 million to the American Civil Liberties Union (ACLU) over the next four years. The ACLU filed suit against Trump’s administration for the refugee ban, and succeeded in getting a temporary stay of the order from a federal judge on Saturday.

Many other Silicon Valley companies have expressed varying levels of opposition to the actions by Trump and his White House, including Google, Microsoft and Apple, but Lyft has done so with a public document (the messages from many others were shared via leaked internal employee emails) and with a clear articulation of why Trump’s actions are wrong on a moral level, not just as a potential hindrance to acquiring top level global talent, or as a threat to current employees who enjoy U.S. visa status.

Uber’s Travis Kalanick released an email to employees noting that the Lyft competitor would be working to provide legal assistance to drivers potentially affected. He also said he’d raise the issue of the ban’s impact on “innocent people” during a meeting of Trump’s business advisory council on Friday, of which Kalanick is a member. Kalanick also acknowledged that many employees might disagree with his decision to advise Trump’s administration, and said they have the right to do so. Uber employees have taken to Twitter to do just that, and the company has faced calls to boycott its service and seen physical protests at its San Francisco HQ as a result of Kalanick’s involvement with Trump’s White House.

Here’s the entire letter sent by Lyft:

We created Lyft to be a model for the type of community we want our world to be: diverse, inclusive and safe.

This weekend, Trump closed the country’s borders to refugees, immigrants, and even documented residents from around the world based on their country of origin. Banning people of a particular faith or creed, race or identity, sexuality or ethnicity, from entering the U.S. is antithetical to both Lyft’s and our nation’s core values. We stand firmly against these actions, and will not be silent on issues that threaten the values of our community.

We know this directly impacts many of our community members, their families and friends. We stand with you, and are donating $1,000,000 over the next four years to the ACLU to defend our constitution. We ask that you continue to be there for each other – and together, continue proving the power of community.

John & Logan

Lyft Co-Founders

Featured Image: lyft

Link : https://techcrunch.com/2017/01/29/lyft-donates-1m-to-the-aclu-condemns-trumps-immigration-actions/?ncid=rss

Forget flying cars — passenger drones are the future

In the July 1924 issue of Popular Science, “Ace of Aces” fighter pilot E.V. Rickenbacker told readers to expect “Flying Autos in 20 Years.” Rickenbacker’s flying car would have retractable 12.5-foot wings, a sea-worthy hull and wheels to cruise America’s growing network of highways.

Ninety-three years later, personal cars remain land-bound. But Rickenbacker’s car-like plane still dominates our idea of what a flying car should be. That expectation — ingrained in decades of pop culture and copied by real technologists — has held back innovation.

The winning “flying car” is going to be a passenger drone, and you won’t find it cruising the highways. It will fly, not drive, blending the best of autonomous driving technology, ridesharing software and drone engineering. And it will hit the friendly skies soon, perhaps within 10 years. The conventional flying car is a dead end, but the barriers to passenger drones are all surmountable.

The easy part

If you expected to zip around the skies in your personal flying car, I’m sorry to burst your bubble. If every rider had to fly 40 hours to earn an FAA-approved Private Pilot license, there would be no market. The passenger drone must be fully automated, and that is easier than it sounds.

Tesla, Google, Uber and other autonomous car makers are within three to five years of commercializing self-driving cars that require no human oversight. All the machine learning algorithms, sensors and safety systems from that effort will serve passenger drones equally well, if not better. Compared to cars, drones will face fewer unpredictable obstacles in the sky and have far more options for evading accidents.


The ridesharing software from companies like Uber and Lyft will be crucial, as well. Besides island-buying billionaires, no one will own passenger drones because they will be prohibitively expensive. Instead, the drones will be offered in taxi or ridesharing services. The Uber and Lyft apps are just what we need for passenger drones. The rider will tap to book a drone, which will fly to the pickup location, land and take off vertically and then fly to the requested address.
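That booking flow implies a matching step: the service has to decide which idle drone to send to the rider. A minimal dispatch sketch (purely illustrative; the function, fleet data and coordinates here are invented, not from any actual Uber or Lyft system) picks the nearest available drone to the pickup point:

```python
import math

# Illustrative only: choose the nearest idle drone to a rider's
# pickup point, the way a ridesharing backend dispatches a car.
def nearest_drone(pickup, idle_drones):
    """pickup: (x, y) coordinates; idle_drones: dict of name -> (x, y)."""
    return min(idle_drones, key=lambda name: math.dist(pickup, idle_drones[name]))

fleet = {"drone-a": (0.0, 0.0), "drone-b": (5.0, 5.0)}
print(nearest_drone((4.0, 4.0), fleet))  # drone-b is closer to (4, 4)
```

A real service would of course weigh battery charge and air-traffic constraints alongside distance, but the matching core is this simple.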

The other “easy” but dicey part of passenger drones is the vehicle design. Most of us have seen either the U.S. military’s winged Predator drones or the tiny quadcopter drones flown by enthusiasts at parks. For passengers, what we need is a blend — a large quadcopter with fixed wings that can sustain flight with a heavy load yet maneuver in cluttered urban environments. It might resemble a larger version of the newest Amazon delivery drone.

The hard part

Passenger-drone development is further along than many people realize. In June 2016, the Chinese firm EHang received clearance from Nevada to test the world’s first passenger drone. The Guardian reports that the drone can fly at up to 11,500 feet at 63 mph, but only for 23 minutes. Uber believes that Uber Elevate, an on-demand air transportation service, is achievable within a decade. Its fleet of electric Vertical Take-off and Landing aircraft (VTOLs) would resemble Lilium Aviation’s jet, which just raised a $10 million Series A. Soon enough, drone makers like DJI, 3D Robotics, Hubsan and even Amazon may put their own passenger vehicles in the race.

These companies will run into two main barriers:

Charging. Currently, battery life is the biggest hurdle for drone makers that wish to increase flight times. A breakthrough in battery technology is no guarantee, but that’s no reason to wait.

Passenger drones will simply need infrastructure for mid-air charging. LaserMotive, a Seattle-based wireless charging startup, shows promise here. Back in 2012, they ran an experiment with Lockheed Martin to extend the flight time of the Stalker Unmanned Aerial System. Their “laser power beaming” kept the drone in flight by targeting lasers at photovoltaic cells (i.e. solar panels) mounted on the vehicle. They sustained flight for 48 hours, marking a 2,400 percent improvement over the usual flight time.

Beaming high-energy lasers into the skies sounds sketchy, but not if drone infrastructure minimizes and compartmentalizes accidents. Cities could designate drone highways and restrict laser charging to those aerial thoroughfares. Mid-air charging would drastically extend flight times and flights per day, as drones would never have to land to charge.

Regulation. Unfortunately, the FAA has been slow to address the drone industry’s call for comprehensive regulations. The existing rules, updated by the FAA in August 2016, insist that drones must remain within the operator’s line of sight and must always be controlled by a live operator. These rules will strangle further innovation, at least in the U.S.

Other countries have welcomed autonomous drones with open arms. For example, Delft, a city in the Netherlands, has agreed to host the first fully autonomous drone network, complete with docking stations and drone rentals. Moreover, Flirtey and Domino’s chose New Zealand for the world’s first commercial drone delivery service because of the country’s friendly regulations. They airlifted the first pizza to customers on November 16.

The U.S. could make a comeback by testing passenger drones with emergency services. Regulators could clear ambulance and search-and-rescue drones for life-and-death situations. In cases of cardiac arrest, for instance, victims need treatment within six minutes for a chance at survival. For people who live in New York, where the average ambulance response time was over 12 minutes in 2015, why not deploy a paramedic and a defibrillator in a drone? Why not take a risk on saving people who would have no chance otherwise? Such trial cases could break down regulatory resistance to autonomous drones.

A better symbol of progress

Movies, books and TV shows have set the expectation that innovators would, eventually, deliver personal flying vehicles. Although many discount this as mere science fiction, the truth is we’re almost there.

While we won’t get the archetypal flying car that E.V. Rickenbacker imagined, we will get something even better. Passenger drones could save Americans from spending 6.9 billion hours per year stuck in traffic. More importantly, emergency passenger drones could prevent thousands of needless deaths. Let’s give passenger drones some airspace and shift the way we think about personal mobility.

Featured Image: gerenme/Getty Images

Link : https://techcrunch.com/2017/01/28/forget-flying-cars-passenger-drones-are-the-future/?ncid=rss

Weekly Roundup: AppDynamics sells to Cisco ahead of IPO, CZI buys Meta

This week saw numerous acquisitions, three major tech lawsuits and government agencies going rogue on Twitter. These are the biggest stories to catch you up on this week’s tech news.

1. AppDynamics, which helps companies monitor application performance, was supposed to go public this week. But the IPO was called off in favor of a giant $3.7 billion acquisition from Cisco. The IPO would have valued AppDynamics at around $2 billion. While the acquisition was part of Cisco’s long-term focus on cloud software, at its core, the deal was actually a data play.

2. Mark Zuckerberg and Priscilla Chan’s $45 billion philanthropy organization made its first acquisition. The Chan Zuckerberg Initiative bought up Meta, an AI-powered research search engine. Meta’s AI recognizes authors and citations between papers so it can surface the most important research instead of just what has the best SEO. It also provides free full-text access to 18,000 journals and literature sources.

3. Donald Trump finished his first week as the president of the United States. Minutes after the inauguration, pages related to LGBT rights and all mentions of climate change were removed from the WhiteHouse.gov site. He signed an executive order that could jeopardize a six-month-old data transfer framework enabling EU citizens’ personal data to flow to the U.S. for processing.

4. President Trump’s Twitter presence is like none other, but his behavior on the social network may prove riskier than initially expected. In addition to the unsecured Android phone he is still reportedly using to access his account, the typos, defamation and informal management of @realDonaldTrump make him an even larger target for hackers.

5. The current administration’s efforts to censor science have already begun. A day after Trump’s inauguration, the National Park Service retweeted pictures of the crowd size at the event. This didn’t sit well with Trump, who reportedly demanded that all National Park bureaus stop tweeting altogether.

6. Apple doesn’t do lawsuits lightly. Hot on the heels of a billion-dollar royalty suit against Qualcomm in the U.S., it’s taking the San Diego-based chipmaker to intellectual property court in Beijing.

7. The lawsuit threatening the future of Facebook’s Oculus VR may have just gotten a lot more expensive. The case claims that Oculus VR stole core VR tech from a ZeniMax Media subsidiary. ZeniMax Media asked the jury to rule against Oculus VR and award $2 billion in compensation, as well as another $2 billion in punitive damages.

8. Tesla has sued the former director of its Autopilot program, Sterling Anderson. The suit alleges that Anderson tried to poach employees with the intent of starting his own autonomous driving company, Aurora, and that he stole proprietary information from Tesla.

9. Airbnb may be about to acquire payments startup Tilt. The move makes sense, given Airbnb’s recent expansion into experiences. Acquiring Tilt would give Airbnb a leg up in the payments space, specifically payments around social gatherings and events.

10. Alphabet reported mixed earnings for its fourth quarter, and it’s clear that the company’s bets beyond search, like Play, Google Cloud and its hardware division, are paying off. Microsoft also revealed financials. The company’s growth was led by its Office and cloud segments, which it is betting on to fuel growth in the future. Verizon (which owns AOL, which owns TechCrunch) fell short of analyst expectations, and PayPal continued to see strong revenue growth through its Q4.


11. Facebook launched yet another Snapchat clone called Facebook Stories in Ireland on iOS and Android, with plans to bring it to more countries in the coming months. The feature will sit right above the News Feed. The big question here is how Facebook will display this on desktop.

Link : https://techcrunch.com/2017/01/27/weekly-roundup-appdynamics-sells-to-cisco-ahead-of-ipo-czi-buys-meta/?ncid=rss

Leapfrogging in higher ed

In the late 1980s, China’s growing economy demanded connectivity as it struggled to reach the United States’ 90 percent household telephone penetration rate. As it turned out, wiring China was a physical and economic impossibility: social and technological realities stood in stark opposition to large-scale needs.

And yet, in just a few short years, China’s telecommunications progress came to define what we now describe as “leapfrogging”: pioneering the application of new technologies to bypass the older framework in place and unlock the economic potential of its more than 1 billion citizens.

Today, higher education faces a similar dilemma. Against a backdrop of upcredentialing, the imperative for degree completion has never been greater. And yet, former President Obama’s call for the United States to lead the world in college completion by 2020 remains a distant possibility.

No public or private entities in the world have the money to build the campuses — let alone develop the quality faculty — needed to produce the billions of college graduates our global economy demands. MOOCs have failed to live up to their democratic promise of access, completion or meaningful learning outcomes. And even if higher education as we know it could scale over time, consumer preferences are evolving even faster. In an uncertain economy, many students are increasingly skeptical that degrees are a worthwhile investment of time and money.

Should we throw in the towel? Or, is higher education poised for a revolution on par with the telecom explosion of the past two decades?

Here’s what we know: The degree is still the coin of the realm in our information economy, but there is unprecedented demand for — and recognition of — non-degree credentials. Indeed, 41 million adults currently hold some form of non-degree credential, and there is growing acknowledgment that tomorrow’s students, dubbed “the new normal” by former U.S. Under Secretary of Education Ted Mitchell, will demand a mix of non-traditional programs and partnerships providing learning opportunities across a work life that is likely to span 60 years or more.


As it turns out, a 100-year-old startup that’s part of the Harvard University community may serve as a model for higher education’s new paradigm. Since 1910, Harvard Extension School has evolved from a series of $5 evening courses into an array of 1,200 open access courses, of which 500 are online. Extension programs like Harvard’s are able to offer courses, professional certificates and degrees to adult learners at typical in-state tuition costs with no endowment support. In many cases, they even create a surplus for their universities. This model may provide the seeds of a paradigm shift for institutional leaders and policymakers re-imagining the role of higher education in the face of a daunting, global challenge.

Here’s why: Traditional higher ed can’t scale to meet human capital demands, and technology can’t replace faculty. But while faculty will remain at the center, extended through the internet and technology-enabled teaching, coaches and mentors will play an increased role in ensuring that students have the support they need to achieve learning outcomes.

“New normal” students need a different type of support as they pursue continuous education across a work life that is likely to include 30 or more distinct jobs and three distinct careers. Different points in that learning life demand different credentials. A BA has huge value at the start of a career, while later on, a graduate certificate might propel a career more efficiently than a lengthier, costlier master’s degree.

It may seem like a surprising statement coming from the ivy walls of Harvard, but the unbundling of higher education need not be a threat to the traditional university. Certificates in particular often serve as a stepping stone toward a degree: 20 percent of undergraduate certificate holders go on to obtain two-year degrees, and an additional 13 percent go on to earn bachelor’s degrees.

For more than 100 years higher education has largely resisted change — and functioned reasonably well without an intense focus on the complex life needs of adult and part-time learners. But like the disruption in the telecom industry, higher ed is poised for its leapfrogging moment. Without this change, how will we reach the 30 million Americans and the billions of global citizens counting on the promise of higher education to advance themselves, their communities and their countries in our complex world?

Featured Image: Prasit Rodphan/Shutterstock

Link : https://techcrunch.com/2017/01/27/leapfrogging-in-higher-ed/?ncid=rss

Rogue National Park Service Twitter account says it’s no longer run by government employees…but maybe it never was

The rogue government Twitter account, AltUSNatParkService, which claimed it was being run by current park rangers, says it has now handed off control of its Twitter account to “several activists and journalists who believe they can continue in the same spirit.”

The move has led some to question if the account was, in fact, ever operated by disgruntled government employees in the first place and, if it was, whether or not it just squandered the power the account held to serve as a means of resisting the Trump administration.

If you haven’t been following closely, the saga of the rogue government Twitter accounts can be a little confusing.

This week, the Trump administration issued gag orders affecting several government agencies, including the Environmental Protection Agency, the Department of Agriculture, the Department of Health and Human Services and the Interior Department, which oversees the National Park Service. The orders limited the agencies’ ability to communicate with the public, including through social media postings.

But one official National Park Service account, @BadlandsNPS (Badlands National Park), seemed to be taking a stand against the new administration. The account was tweeting facts about climate change, apparently in defiance of President Trump’s position on the matter.

When those tweets were later deleted, the public cried that government censorship was at hand.

The National Park Service, however, issued a statement explaining that the tweets had been posted by a former employee who was no longer authorized to use the park’s account:

“The park was not told to remove the tweets but chose to do so when they realized their account had been compromised. At this time, National Park Service social media managers are encouraged to continue the use of Twitter to post information relating to public safety and park information, with the exception of content related to national policy issues.”

Not everyone bought this explanation, naturally.

Soon thereafter, rogue government Twitter accounts representing the oppressed agencies started to appear.

The first and most infamous was @AltNatParkSer (AltUSNatParkService), which tweeted defiantly: “Can’t wait for President Trump to call us FAKE NEWS. You can take our official twitter, but you’ll never take our free time!”   

The account claimed to be run by park rangers, but said they couldn’t identify themselves.

“…we do not feel it is safe or in the public interest to identify ourselves. We have been advised of this by [several] journalists,” the account tweeted on Wednesday.

The whole thing makes for a good story, and one which it’s all too easy to envision: people who have dedicated their lives to protecting the earth, forced into silence by their government, leaving them no choice but to anonymously begin tweeting the truth.

Yet, there has never been proof that @AltNatParkSer was ever run by disgruntled government employees. Journalists tried, and failed, to confirm this. And now, the account says it has handed itself off to “activists and journalists,” which just seems odd.

First of all, if these employees felt so strongly about defying Trump, why ditch the account now that it has worldwide attention? Wouldn’t that have been the end goal?  And who are these “reporters” supposedly working with the activists instead of just covering them?

And why does an account that’s now supposedly run by “former scientists” need to employ journalists for fact checking? Aren’t scientists capable of that? Isn’t factual information the point of science?

There are a lot of questions here, and still too few answers.

For what it’s worth, the account claims its operators had grown fearful of criminal prosecution because they were using the NPS’s official logo.

It’s just as reasonable, though, to assume that the original account holders were fearful that they may end up being exposed as mere activists, not defiant government workers.

After all, the account never verified itself through Twitter or any other trusted third party, such as through one of these supposed “journalist” partners, or a trusted, independent organization like the EFF, as was suggested by VICE’s Motherboard.

And when pressed about its lack of verification, it got a little snippy.

Despite the lack of proof, many took the account at its word because its existence fits a certain narrative.

To further confuse the matter, the popularity of @AltNatParkSer prompted a wave of other “Alt” government accounts to appear.

Suddenly we had @BadHombreNPS, @AltMtRainierNPS, @RogueNASA, @AltForestServ, @AltHHS, @Alt_FDA, @ActualEPAFacts, @AltUSDA, @AlternativeNWS and others to track.

Some of these also claim to be run by government employees, while others explicitly said they are not. Others say they’re operated by a combination of employees and activists, and some say nothing about their creators at all.

Some are now downplaying any “official” affiliation, or saying they’re going to change their logo from their (illegally used) official one. In fact, @AltMtRainierNPS just asked Twitter if doing so would help it get verified. (Twitter didn’t respond.)

Another, @AlternativeNWS, says it’s also trying to hand off its account to non-government people for “safety.”

As these accounts are supposedly transferred, many Twitter users are thanking them for being so “brave,” “strong” and defiant. Unfortunately, we have no proof that any government employees ever actually took a stand. We only have the anonymous account’s word, and we’re being asked to believe on faith.

While the spirit of resistance is certainly understandable, the emergence of “rogue government Twitter” has only led to a lot of confusion.

It doesn’t help the resistance to further blur the line between truth and fiction. It doesn’t help if the media reports an anonymous Twitter account’s story as fact when the source isn’t verified.

And it doesn’t help for there now to be a plethora of government-esque accounts being run by a range of unidentified parties tweeting things they want everyone to believe are facts.

Featured Image: NPS/Kurt Moses (public domain)

Link : https://techcrunch.com/2017/01/27/rogue-national-park-service-twitter-account-says-its-no-longer-run-by-government-employees-but-maybe-it-never-was/?ncid=rss

Zuckerberg defends immigrants threatened by Trump

While other tech leaders glad-hand with The Donald, Mark Zuckerberg is facing him head on. Today the Facebook CEO called out the president for his un-American views demonizing immigrants, while also tactfully encouraging the few positive policies and comments Trump has offered on the subject.

You should read Zuckerberg’s full Facebook post on the topic, but the highlights include:

“Like many of you, I’m concerned about the impact of the recent executive orders signed by President Trump.

We need to keep this country safe, but we should do that by focusing on people who actually pose a threat.

We should also keep our doors open to refugees and those who need help.

That said, I was glad to hear President Trump say he’s going to ‘work something out’ for Dreamers — immigrants who were brought to this country at a young age by their parents…over the next few weeks I’ll be working with our team at FWD.us to find ways we can help.

I’m also glad the President believes our country should continue to benefit from ‘people of great talent coming into the country.’

We are a nation of immigrants, and we all benefit when the best and brightest from around the world can live, work and contribute here.”

Many business moguls see cooperating with Trump as crucial to protecting their businesses, but in this case, Zuckerberg’s oppositional perspective is also vital to Facebook’s success. The company employs a large number of immigrants in the U.S. via the high-skilled H-1B visa. Additional restrictions on the already overloaded H-1B visa program could prevent Facebook from hiring the talent it needs to keep building the world’s most popular social networking products.


Zuckerberg meeting with heads of state

Trump has railed against globalization’s inevitable effect of manufacturing and other jobs moving to countries with looser worker protection and minimum wage laws. Trump’s election happened in part because he harnessed the fear of less-educated American whites looking for an easy answer or scapegoat for their financial troubles.

While the country will have to come to grips with the coming unemployment crisis fueled by globalization and automation, and Zuckerberg doesn’t claim to have an answer here, he’s brave to stick up for the inclusive values that underpin the modern American spirit. Even if that means sparring with the other most powerful man in the world.

Meanwhile, Zuckerberg today announced he’s dropping his Hawaiian land lawsuits that were part of him securing land on the island to build a home, calling the suits “a mistake” and planning a different route forward. The CEO has learned a lot about listening to the public since the early days of Facebook’s privacy missteps.

In other Facebook-Trump news, COO Sheryl Sandberg yesterday posted her stern disagreement with Trump signing an executive order that will pull funding from foreign aid organizations that provide counseling about family planning options including abortion. She wrote, “Women’s rights are human rights — and there is no more basic right than health care. Women around the world deserve our support.”

Zuckerberg has said he’s not currently planning to run for president. But it’s clear he still plans to use his immense audience, power and fortune to push for progressive policies that don’t rely on turning Americans and the nations of the world against each other.

Link : https://techcrunch.com/2017/01/27/trump-faces-resistance/?ncid=rss

Google CEO Sundar Pichai fears impact of Trump immigration order, recalls staff

Google CEO Sundar Pichai has outlined his disapproval of the impact arising from Trump’s dangerous, inhumane and short-sighted immigration order. The sweeping order imposes, for at least 90 days, a block on entry to the U.S. for citizens of seven countries (including valid visa holders); blocks refugee admittance from Syria indefinitely; and caps the total number of refugees allowed to enter the U.S. in 2017 at 50,000, less than half the number admitted in 2016. The measure also suspends admittance of all refugees for a period of 120 days.

Pichai distributed an internal memo, seen by both Bloomberg and the Wall Street Journal, which said that Google was “upset about the impact of this order,” specifically as it relates to restrictions placed upon “Googlers and their families,” as well as how it could impose “barriers to bringing great talent to the U.S.” Pichai also noted that it has been “painful to see the personal cost of this executive order on our colleagues” in the memo.

Google apparently recalled all employees potentially impacted who were abroad in an effort to get them back in the U.S. before the order took effect, and Pichai noted in his memo that a minimum of 187 Google employees were directly affected by the ban. Google offered an official statement on the matter to Bloomberg:

We’re concerned about the impact of this order and any proposals that could impose restrictions on Googlers and their families, or that create barriers to bringing great talent to the U.S. We’ll continue to make our views on these issues known to leaders in Washington and elsewhere.

Much of the sentiment of the note focused on the impact to Google and its employees, but Pichai did include a more far-reaching comment that “we wouldn’t wish this fear and uncertainty on anyone – and especially not our fellow Googlers,” ending with an affirmation that “in times of uncertainty, our values remain the best guide.”

On Friday, Facebook CEO Mark Zuckerberg also posted a note to his personal Facebook page about his concern over “the impact of the recent executive orders signed by President Trump,” though his note falls short of strongly condemning the actions and in fact quickly turns to highlighting how “glad” Zuckerberg is to hear about Trump’s potential continued support of the DREAMers program, which allows special exceptions for undocumented immigrants who entered the U.S. at a young age, and about Trump’s stated support for continuing to bring talent to the U.S. from abroad.

The internal memo from Google is more generally critical than Zuckerberg’s politic statement, but so far none of these tech leaders has come out with an outright condemnation of Trump’s sweeping, harmful orders, which are also worded to prioritize refugee applicants from religious minorities in Muslim-majority nations, effecting in practice Trump’s campaign promise of a ban on Muslim immigration. It’s likely that these orders will face strong challenges from courts and lawmakers, but in the meantime they are already impacting would-be refugees who were otherwise set to enter the U.S.

Featured Image: Ramin Talaie/Getty Images/Getty Images

Link : https://techcrunch.com/2017/01/28/google-ceo-sundar-pichai-fears-impact-of-trump-immigration-order-recalls-staff/?ncid=rss

AI’s open source model is closed, inadequate, and outdated

Artificial intelligence is big. And getting bigger. Enterprises that have experience with machine learning are looking to graduate to artificial intelligence-based technologies.

Enterprises that have yet to build machine learning expertise are scrambling to understand and devise a machine learning and AI strategy. Amid the hype, confusion, paranoia and the risk of being left behind, the slew of open source contribution announcements from companies like Google, Facebook, Baidu and Microsoft (through projects such as TensorFlow, Big Sur, Torch, scikit-learn, Caffe, CNTK, DMTK, Deeplearning4j, H2O, Mahout, MLlib, NuPIC, OpenNN, etc.) offers an obvious approach to getting started with AI and ML, especially for enterprises outside the technology industry.

Find the project, download, install…should be easy. But it is not as easy as it seems.

The current open source model is outdated and inadequate for sharing software in a world run by AI-enabled or AI-influenced systems, where users could potentially interact with thousands of AI engines in the course of a single day.

It is not enough for the pioneers of AI and ML to share their code. The industry and the world need a new open source model in which the trained AI and ML engines themselves are open sourced, along with the data, features and real-world performance details.

AI- and ML-enabled and influenced systems are different from other software built using open source components. Software built from open source components is still deterministic in nature, i.e. the software is designed and written to perform exactly the same way each time it is executed. AI and ML systems, especially artificially intelligent systems, are not guaranteed to exhibit deterministic behavior. These systems change their behavior as they learn and adapt to new situations, new environments and new users. In essence, the creator of the system stands to lose control of the AI as soon as it is deployed in the real world. Yes, creators can build checks and balances into the learning framework. However, even within the constraints baked into the AI, there is a huge spectrum of interpretation. At the same time, the bigger challenge facing a world encompassed by AI is the conflict born out of those human-baked-in constraints.
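The deterministic/adaptive distinction can be sketched in a few lines of Python. This is a toy illustration with all names and numbers hypothetical: a conventional function always returns the same answer, while a learner's answer to the same input shifts after a single real-world interaction.

```python
def deterministic_discount(price):
    # Classic software: same input, same output, every run.
    return round(price * 0.9, 2)

class OnlineLearner:
    """A one-weight model that updates itself after every observation."""
    def __init__(self):
        self.weight = 0.5

    def predict(self, x):
        return self.weight * x

    def learn(self, x, target, lr=0.01):
        # One gradient step on squared error: the model changes with experience.
        error = self.predict(x) - target
        self.weight -= lr * error * x

model = OnlineLearner()
before = model.predict(10)   # 5.0
model.learn(10, 9)           # a single real-world interaction
after = model.predict(10)    # ~9.0 -- same input, different answer

print(deterministic_discount(100), before, after)
```

The discount function behaves identically forever; the learner's creator cannot say in advance what `predict(10)` will return a month after deployment, which is exactly the loss of control described above.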

Consider the recent report quoting Mercedes executive Christoph von Hugo as saying that Mercedes self-driving cars would choose to protect the lives of their passengers over the lives of pedestrians. Even though the company later clarified that von Hugo was misquoted, this exposes the fundamental question of how capitalism will influence the constraints baked into AI.

Capitalism and the Ethics of AI

If the purpose of an enterprise is to drive profits, how soon before products and services start hitting the market that position the AI-based experience as a value-added, differentiating experience and ask the buyer to pay a premium for it?

In this situation, users who are willing and able to pay for the differentiated experience will gain an undue advantage over other users. Because enterprises will try to recoup their investments in AI, the technology will be limited to those who can afford it. This will lead to constraints and behavior baked into the AI that effectively benefit, protect or give preference to the paying users.

Another concern is the legal and policy question of who is responsible for malfunctioning or suboptimal behavior of AI & ML enabled products. Does the responsibility rest with the user, the service provider, the data scientist or the AI engine? How is the responsibility (and blame) assigned? Answering these questions requires that the series of events leading to the creation and usage of AI and ML can be clearly described and followed.

AI to AI Interactions

AI – AI Conflicts

Given the possibly non-deterministic nature of how AI-enabled products would behave in previously unobserved interactions, the problem is magnified in scenarios where AI-enabled products interact with each other on behalf of two or more different users. For example, what happens if two cars driven by two independent AI engines (built by different companies with different training data and features, and independently configured biases and context) approach a stop sign, or are heading toward a crash? Slight differences and variations in how these systems approach and react to similar situations can have unintended and potentially harmful side effects.

Bias Leakage

Another potential side effect of interacting AI engines magnifies the training bias risk. For example, if a self-driving car observes another self-driving car protecting its passengers at the cost of pedestrians, and observes that this choice allows the other car to avoid an accident, its “learning” would be to behave similarly in a similar situation. This can lead to bias leakage, where an independently trained AI engine can be influenced (positively or negatively) by another AI engine.

Learning Agility

Even when similar AI engines are offered with the same learning data, differences in the training environments and the infrastructure used to perform the training can cause the training and learning to proceed at different rates and derive different conclusions as a result. These slight variations could, over time, lead to significant changes in behavior of the AI engine with unforeseen consequences.

Stale and “Forgotten” AI Engines and AI junkyards

In a world of many products enabled through AI, what happens as products are abandoned or go extinct? The embedded AI can become frozen in time, leading to the creation of an AI junkyard. These abandoned AI-enabled products are a culmination of learnings from their environment and context up to a point in time; if resurrected for any reason in a different time, environment or context, they can again lead to unpredictable or undesirable effects.

We need a new model for open source AI that provides a framework for addressing some of the problems listed above. Given the nature of AI, it is not enough to open source the technology used to build AI and ML engines and embed them into products. In addition, similar to scientific research, the industry will need to contribute back actual AI and ML engines that can form the basis of new and improved systems, engines and products.

Baselining, Benchmarks, and Standards

For all key scenarios such as self-driving cars, photo recognition, speech-to-text, etc., especially with multiple service providers, the industry needs the ability to define a baseline and standards against which all new or existing AI engines are evaluated and stack-ranked (for example, consider the AI equivalent of the NHTSA’s 5-Star Safety Ratings for self-driving cars). Defining an industry-accepted and approved benchmark for key scenarios can ensure that service providers and consumers make informed decisions about picking AI- and ML-enabled products and services. In addition, existing AI engines can be continually evaluated against the benchmarks and standards to ensure that the quality of these systems is always improving.
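In spirit, such a benchmark is just a shared suite of labeled cases that every engine is scored against. A minimal sketch in Python, with the scenario names, engines and scores all hypothetical:

```python
# A shared benchmark: (scenario input, expected decision) pairs agreed on
# by the industry for one key use case.
BENCHMARK = [
    ("pedestrian_in_crosswalk", "stop"),
    ("green_light_clear_road", "go"),
    ("stop_sign_ahead", "stop"),
    ("yellow_light_far", "slow"),
]

def evaluate(engine, benchmark):
    """Fraction of benchmark cases an engine decides correctly."""
    hits = sum(1 for scenario, expected in benchmark if engine(scenario) == expected)
    return hits / len(benchmark)

# Two stand-in "engines" from different vendors, modeled as lookup tables.
cautious = {"pedestrian_in_crosswalk": "stop", "green_light_clear_road": "go",
            "stop_sign_ahead": "stop", "yellow_light_far": "slow"}.get
reckless = {"pedestrian_in_crosswalk": "stop", "green_light_clear_road": "go",
            "stop_sign_ahead": "go", "yellow_light_far": "go"}.get

# Stack-rank all engines by benchmark score, best first.
ranked = sorted([("cautious", evaluate(cautious, BENCHMARK)),
                 ("reckless", evaluate(reckless, BENCHMARK))],
                key=lambda pair: pair[1], reverse=True)
print(ranked)
```

Re-running the same evaluation as engines are updated gives the continual quality check described above; publishing the scores gives buyers the star-rating-style comparison.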

Companies building AI and ML models should consider contributing entire AI and ML models to open source (beyond contributing the technology and frameworks used to build such models). For example, even 5-year-old image recognition models from Google or speech-to-text models from Microsoft could spark much faster innovation and assimilation of AI and ML in other sectors, industries and verticals, creating a self-sustaining loop of innovation. Industries outside tech can use these models to jump-start their own efforts and contribute their learnings back to the open source community.

Bias Determination

Bias determination capabilities are required so that biases encoded into AI and ML engines can be uncovered and removed as soon as possible. Without such capabilities, it will be very hard for the industry to converge on universal AI engines that perform consistently and deterministically across the spectrum of scenarios. Bias determination and removal will require the following support in the open source model for AI.

Data Assumptions and Biases

AI-enabled product designers need to ensure that they understand the assumptions and biases made and embedded in the AI & ML engine. Products that interact with other AI enabled products need to ensure that they understand and are prepared to deal with the ramifications of the AI engine’s behavior. To ensure that consumers or integrators of such AI and ML models are prepared, the following criteria should be exposed and shared for each AI and ML model.

Collection Criteria

How is the data collected? What are the data generators? How often, where, when, how and why is the data generated? How is it collected, staged and transported?

Selection Criteria

How is the data selected for training? What are the criteria for data not being selected? What subset of data is selected and not selected? What are the criteria that define high-quality data? What are the criteria for acceptable but not high-quality data?

Processing Criteria

How is the data processed for training? How is the data transformed, enriched and summarized? How often is it processed? What causes scheduled processing to be delayed or stopped?
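One way these collection, selection and processing criteria could travel with a shared model is as a machine-readable "datasheet" published alongside it. A sketch of what that might look like, with every field name and value hypothetical:

```python
import json

# Hypothetical datasheet for an open-sourced model, answering the
# collection, selection and processing questions up front.
datasheet = {
    "model": "example-image-classifier-v1",
    "collection_criteria": {
        "sources": ["in-app uploads"],
        "cadence": "continuous",
        "regions": ["US", "EU"],   # a geographic bias worth disclosing
    },
    "selection_criteria": {
        "included": "images with user-confirmed labels",
        "excluded": "images below 64x64 px or flagged as duplicates",
    },
    "processing_criteria": {
        "transforms": ["resize to 224x224", "normalize RGB"],
        "schedule": "nightly batch; paused on pipeline failure",
    },
}

# Shipping this file next to the model weights lets integrators audit
# the assumptions before they inherit them.
print(json.dumps(datasheet, indent=2))
```

An integrator reading this datasheet would know, for example, that the model has never seen data from outside the US and EU, before deciding whether to deploy it elsewhere.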

Feature Assumptions and Biases

AI and ML models are trained through the inspection of features, or characteristics, of the system being modeled. These features are extracted from the data and are used in the AI and ML engine to predict a behavior of the system, or to classify new signals into desired categories to prompt a certain action or behavior from the system. Consumers and integrators of AI models need a good understanding not only of which features were selected for developing the AI model, but also of which features were considered and not selected, including the reasons for their rejection. In addition, visibility into the process and insights used to determine the training features will need to be documented and shared.

Blind Spots Removal

Due to the built-in biases and assumptions in the model, AI and ML engines can build up blind spots that limit their usefulness and efficacy in certain situations, environments, and context.

Blind Spot Reporting and Feedback Loops

Another key feature of the open source model for AI and ML should be the ability to not only determine whether or not a particular model has blind spots but also have the ability to contribute back data (real life examples) to the AI model that could be used to remove these blind spots. This is very similar, in principle, to email spam reporting by users where the spam detection engine can use the newly provided spam examples to update its definition of spam and the filter required to detect it.
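The spam-report analogy can be made concrete with a toy feedback loop. This sketch (all class and method names hypothetical) shows a deliberately naive filter missing an example, a user flagging it, and the flagged example being folded back into the model:

```python
class KeywordSpamFilter:
    """Toy filter: a message is spam if it contains a known spam phrase."""
    def __init__(self):
        self.spam_phrases = {"free money"}
        self.reported = []   # user-flagged examples the model missed

    def is_spam(self, message):
        text = message.lower()
        return any(phrase in text for phrase in self.spam_phrases)

    def report_spam(self, message):
        # Feedback loop, step 1: collect the blind-spot example...
        self.reported.append(message.lower())

    def retrain(self):
        # ...step 2: fold it back into the model, closing the blind spot.
        self.spam_phrases.update(self.reported)
        self.reported.clear()

f = KeywordSpamFilter()
msg = "act now for a limited offer"
missed_before = f.is_spam(msg)   # False: a blind spot
f.report_spam(msg)               # a user contributes the real-life example
f.retrain()
caught_after = f.is_spam(msg)    # True after the feedback is incorporated
print(missed_before, caught_after)
```

The open source model proposed here would standardize the `report_spam`-style channel, so that any consumer of a shared AI engine could contribute blind-spot examples back to it.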

Collaborative Blind Spot Removal

Another feature of the ideal open source protocol would be the sharing of data between service providers to enable shared and collaborative blind spot removal. Consider Google’s self-driving car and Tesla’s Autopilot. Google has covered around 2 million miles in autonomous driving mode, whereas Tesla has covered almost 50 million miles of highway driving. If we look beyond the fact that these companies are competitors, their data sets contain a lot of relevant data for avoiding crashes and for driver, passenger and pedestrian safety. Each could leverage the other’s data set to improve its own safety protocols and procedures. Possibly, such data should be part of the open source model for maximum benefit to the industry and the user base.

 

For AI and ML to truly revolutionize and disrupt our lives and offer better, simpler, safer and more delightful experiences, they need to be included in as many scenarios and use cases as possible, across many industries and verticals. To truly jump-start and accelerate this adoption, open sourcing the frameworks used to build AI and ML engines is not enough. We need a new open source model that enables enterprises to contribute and leverage not just the technology for building AI and ML, but entire trained models that can be improved, adjusted or adapted to a new environment, along with baselines and standards for each scenario so that new engines can be benchmarked against them. In addition, information that reveals the assumptions and biases in AI and ML models (at the data or feature level), and feedback loops that let consumers of these models contribute important data and feedback back to all the AI and ML products serving a given use case, also become critical. Without such an open source model, the world outside the technology sector will continue to struggle in its adoption of AI and ML.

Featured Image: Getty Images

Link : https://techcrunch.com/2017/01/28/ais-open-source-model-is-closed-inadequate-and-outdated/?ncid=rss

Artificial intelligence and the law

Laws govern the conduct of humans, and sometimes the machines that humans use, such as cars. But what happens when those cars become human-like, as in artificial intelligence that can drive cars? Who is responsible for any laws that are violated by the AI?

This article, written by a technologist and a lawyer, examines that future of AI law.

The field of AI is in a sort of renaissance, with research institutions and R&D giants pushing the boundaries of what AI is capable of. Although most of us are unaware of it, AI systems are everywhere, from bank apps that let us deposit checks with a picture, to everyone’s favorite Snapchat filter, to our handheld mobile assistants.

Currently, one of the next big challenges AI researchers are tackling is reinforcement learning, a training method that allows AI models to learn from their past experiences. Unlike other methods of generating AI models, reinforcement learning feels more like sci-fi than reality. With reinforcement learning, we create a grading system for our model, and the AI must determine the best course of action in order to get a high score.

Research into complex reinforcement learning problems has shown that AI models are capable of finding varying methods to achieve positive results. In the years to come, it might be common to see reinforcement learning AI integrated with more hardware and software solutions, from AI-controlled traffic signals capable of adjusting light timing to optimize the flow of traffic to AI-controlled drones capable of optimizing motor revolutions to stabilize videos.
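The "grading system" idea can be sketched in a few lines. This is a toy, one-state Q-learning loop for the hypothetical traffic-signal example: the agent tries actions, receives a score (reward), and drifts toward whatever scores highest. All action names and reward values are illustrative.

```python
import random

random.seed(0)   # make the toy run reproducible

ACTIONS = ["extend_green", "shorten_green"]
REWARD = {"extend_green": 1.0, "shorten_green": 0.2}   # the "grades"

# The agent's running estimate of each action's score.
q = {a: 0.0 for a in ACTIONS}

for step in range(200):
    # Explore occasionally; otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(q, key=q.get)
    # Nudge the estimate toward the observed reward.
    q[action] += 0.1 * (REWARD[action] - q[action])

best = max(q, key=q.get)
print(best, q)
```

Note that the agent optimizes only the grade it is given: if the reward function scored "change the light one second earlier" highly while ignoring accident rates, the agent would happily learn that behavior, which is exactly the legal scenario posed below.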

How will the legal system treat reinforcement learning? What if the AI-controlled traffic signal learns that it’s most efficient to change the light one second earlier than previously done, but that causes more drivers to run the light and causes more accidents?

Traditionally, the legal system’s interactions with software like robotics only finds liability where the developer was negligent or could foresee harm. For example, Jones v. W + M Automation, Inc., a case from New York state in 2007, did not find the defendant liable where a robotic gantry loading system injured a worker, because the court found that the manufacturer had complied with regulations.

It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions.

But in reinforcement learning, there’s no fault by humans and no foreseeability of such an injury, so traditional tort law would say that the developer is not liable. That certainly will pose Terminator-like dangers if AI keeps proliferating with no responsibility.

The law will need to adapt to this technological change in the near future. It is unlikely that we will enter a dystopian future where AI is held responsible for its own actions, given personhood and hauled into court. That would assume that the legal system, which has been developed for over 500 years in common law and various courts around the world, would be adaptable to the new situation of an AI.

An AI by design is artificial, and thus ideas such as liability or a jury of peers appears meaningless. A criminal courtroom would be incompatible with AI (unless the developer is intending to create harm, which would be its own crime).

But really the question is whether the AI should be liable if something goes wrong and someone gets hurt. Isn’t that the natural order of things? We don’t regulate non-human behavior, like animals or plants or other parts of nature. Bees aren’t liable for stinging you. After considering the ability of the court system, the most likely reality is that the world will need to adopt a standard for AI in which manufacturers and developers agree to abide by general ethical guidelines, such as through a technical standard mandated by treaty or international regulation. And this standard will be applied only when it is foreseeable that the algorithms and data can cause harm.

This likely will mean convening a group of leading AI experts, such as OpenAI, and establishing a standard that includes explicit definitions for neural network architectures (a neural network contains instructions to train an AI model and interpret an AI model), as well as quality standards to which AI must adhere.

Standardizing what the ideal neural network architecture should be is somewhat difficult, as some architectures handle certain tasks better than others. One of the biggest benefits that would arise from such a standard would be the ability to substitute AI models as needed without much hassle for developers.

Currently, switching from an AI designed to recognize faces to one designed to understand human speech would require a complete overhaul of the neural network associated with it. While there are benefits to creating an architecture standard, many researchers will feel limited in what they can accomplish while sticking to the standard, and proprietary network architectures might be common even when the standard is present. But it is likely that some universal ethical code will emerge as conveyed by a technical standard for developers, formally or informally.

The concern for “quality,” including avoidance of harm to humans, will increase as we start seeing AI in control of more and more hardware. Not all AI models are created the same, as two models created for the same task by two different developers will work very differently from each other. Training an AI can be affected by a multitude of things, including random chance. A quality standard ensures that only AI models trained properly and working as expected would make it into the market.

For such a standard to actually have any power, we will most likely need some sort of government interference, which does not seem too far off, considering recent talks in British parliament regarding the future regulation of AI and robotics research and applications. Although no concrete plans have been laid out, parliament seems conscious of the need to create laws and regulations before the field matures. As stated by the House of Commons Science and Technology Committee, “While it is too soon to set down sector-wide regulations for this nascent field, it is vital that careful scrutiny of the ethical, legal and societal dimensions of artificially intelligent systems begins now.” The document also mentions the need for “accountability” when it comes to deployed AI and the associated consequences.

Featured Image: DAMIEN MEYER/AFP Creative/Getty Images

Link : https://techcrunch.com/2017/01/28/artificial-intelligence-and-the-law/?ncid=rss

Tech reacts to Trump’s immigration ban

President Donald Trump signed an executive order on Friday that temporarily halted the admission of refugees, indefinitely banned the admission of refugees from Syria, and stopped citizens of several Muslim-majority countries from entering the U.S. The American Civil Liberties Union has already filed a legal challenge to the order.

The order is so sweeping that it also includes any green card and visa holders from these countries. So if you were a citizen of these countries (Iran, Iraq, Syria, Sudan, Libya, Yemen and Somalia) and had the bad luck of being outside of the U.S. at the time the order went into effect, you’re now barred from entering the country for at least the next 90 days. Unsurprisingly, that’s already affecting the employees of many of the largest tech companies, which tend to draw from a global talent pool.

We know that Google already recalled its employees from abroad — though chances are the alert came too late to allow anybody to travel back to the U.S. in time. “We’re concerned about the impact of this order and any proposals that could impose restrictions on Googlers and their families, or that create barriers to bringing great talent to the U.S.,” the company wrote in an official statement. “We’ll continue to make our views on these issues known to leaders in Washington and elsewhere.”

Facebook founder and CEO Mark Zuckerberg, too, yesterday noted in a Facebook post that he is “concerned about the impact of the recent executive orders signed by President Trump” though he also added that he was “glad” that Trump was willing “to ‘work something out’ for Dreamers” and that the President “believes our country should continue to benefit from ‘people of great talent coming into the country.’”

Facebook added in a statement today, “We are assessing the impact on our workforce and determining how best to protect our people and their families from any adverse effects.”

Microsoft told us that it is already providing legal assistance to its employees affected by this: “We share the concerns about the impact of the executive order on our employees from the listed countries, all of whom have been in the United States lawfully, and we’re actively working with them to provide legal advice and assistance.”

Microsoft CEO Satya Nadella spoke out in favor of immigration in a post on LinkedIn. “As an immigrant and as a CEO, I’ve both experienced and seen the positive impact that immigration has on our company, for the country, and for the world. We will continue to advocate on this important topic,” Nadella said.

Nadella also shared a memo from Microsoft’s chief legal officer Brad Smith, in which Smith revealed that at least 76 Microsoft employees are affected by Trump’s order. “But there may be other employees from these countries who have U.S. green cards rather than a visa who may be affected, and there may be family members from these countries that we haven’t yet reached,” Smith added. Smith said he and Nadella would answer employee questions during a question-and-answer session on Monday.

LinkedIn CEO Jeff Weiner noted that many Fortune 500 companies are founded by immigrants or their children, and wrote, “All ethnicities should have access to opportunity — founding principle of U.S.” (LinkedIn was acquired by Microsoft last year.)

In a memo obtained by TechCrunch, Apple CEO Tim Cook says the company has reached out to employees affected by the order. “In my conversations with officials here in Washington this week, I’ve made it clear that Apple believes deeply in the importance of immigration — both to our company and to our nation’s future,” Cook wrote. “Apple would not exist without immigration, let alone thrive and innovate the way we do.”

Uber CEO Travis Kalanick sent an email to his team on Saturday afternoon, noting that the order affected about “a dozen or so employees.” He also added that the company will identify drivers who may be barred from entering the U.S. for the next 90 days and compensate them pro bono “to help mitigate some of the financial stress and complications with supporting their families and putting food on the table.” It’s unclear how long it will take for Uber to identify these drivers, though. As a member of Trump’s business advisory group, Kalanick will meet with Trump next week.

You can read his full email below:

From: Travis Kalanick
Date: Sat, Jan 28, 2017 at 1:20 PM
Subject: Standing up for what’s right
To: Uber Team

Team,

Yesterday President Trump signed an executive order suspending entry of citizens from seven countries—Iran, Iraq, Libya, Somalia, Sudan, Syria and Yemen—to the United States for at least the next 90 days.

Our People Ops team has already reached out to the dozen or so employees who we know are affected: for example, those who live and work in the U.S., are legal residents but not naturalized citizens will not be able to get back into the country if they are traveling outside of the U.S. now or anytime in the next 90 days. Anyone who believes that this order could impact them should contact [email protected] immediately.

This order has far broader implications as it also affects thousands of drivers who use Uber and come from the listed countries, many of whom take long breaks to go back home to see their extended family. These drivers currently outside of the U.S. will not be able to get back into the country for 90 days. That means they will not be able to earn a living and support their families—and of course they will be separated from their loved ones during that time.

We are working out a process to identify these drivers and compensate them pro bono during the next three months to help mitigate some of the financial stress and complications with supporting their families and putting food on the table. We will have more details on this in the coming days.

While every government has their own immigration controls, allowing people from all around the world to come here and make America their home has largely been the U.S.’s policy since its founding. That means this ban will impact many innocent people—an issue that I will raise this coming Friday when I go to Washington for President Trump’s first business advisory group meeting.

Ever since Uber’s founding we’ve had to work with governments and politicians of all political persuasions across hundreds of cities and dozens of countries. Though we share common ground with many of them, we have had areas of disagreement with each of them. In some cases we’ve had to stand and fight to make progress, other times we’ve been able to effect change from within through persuasion and argument.

But whatever the city or country—from the U.S. and Mexico to China and Malaysia—we’ve taken the view that in order to serve cities you need to give their citizens a voice, a seat at the table. We partner around the world optimistically in the belief that by speaking up and engaging we can make a difference. Our experience is that not doing so shortchanges cities and the people who live in them. This is why I agreed in early December to join President Trump’s economic advisory group along with Elon Musk (CEO of Tesla), Mary Barra (Chairwoman/CEO of General Motors), Indra Nooyi (Chairwoman/CEO of Pepsi), Ginni Rometty (Chairwoman/CEO of IBM), Bob Iger (Chairman/CEO of Disney), Jack Welch (former Chairman of GE) and a dozen other business leaders.

I understand that many people internally and externally may not agree with that decision, and that’s OK. It’s the magic of living in America that people are free to disagree. But whatever your view please know that I’ve always believed in principled confrontation and just change; and have never shied away (maybe to my detriment) from fighting for what’s right.

Thanks,

Travis

Tesla and SpaceX CEO Elon Musk added his comments via Twitter late on Saturday afternoon. They were not very strongly worded.

In a statement to TechCrunch, a Tesla spokesperson added, “We hope that this temporary action by the Administration transitions to a fair and thoughtful long-term policy.” A small number of Tesla employees are affected by the order.

Twilio CEO Jeff Lawson calls the ban “fundamentally UnAmerican” in a blog post today. “Yesterday marked a solemn day for the United States, as we’ve betrayed one of our most cherished values. For over 200 years, the promise of America has been freedom from oppression and opportunity for those in need. While we’ve made mistakes along the way, we’ve always come to regret relinquishing our values to xenophobia,” Lawson writes. “Yesterday, that beacon of hope and freedom was extinguished, exactly when humanity needs it the most. Globally there are over 60,000,000 displaced people, more than any time since World War II. And today we turned our backs on them.”

In an emailed statement, Mozilla CEO Chris Beard said that he believes that “The immigration ban imposed by Friday’s executive order is overly broad and its implementation is highly disruptive to fostering a culture of innovation and economic growth.” He added that he believes that “The ban will have an unnecessary negative impact to the health and safety of those affected and their families, not to mention rejecting refugees fleeing persecution, terror and war.”

Here is his full statement:

“The immigration ban imposed by Friday’s executive order is overly broad and its implementation is highly disruptive to fostering a culture of innovation and economic growth.

By slamming the door on talented immigrants –including those already legally in the United States and those seeking to enter – the ban will create a barrier to innovation, economic development and global impact. Immigrants bring world class skills and expertise to build advanced technology that can improve the lives of people everywhere. The ban will have an unnecessary negative impact to the health and safety of those affected and their families, not to mention rejecting refugees fleeing persecution, terror and war.

The executive order ignores the single truth that we have come to know: talented immigrants have had outsized contributions to the growth and prosperity of the United States and countries around the world. Diversity in all of its forms is crucial to growth, innovation and a healthy, inclusive society.

We recognize the rights of sovereign nations to protect their security, but believe that this overly broad order and its implementation does not create an appropriate and necessary balance. It’s a bad precedent, ignores history, and is likely to do more lasting harm than good.”

Here is Netflix CEO Reed Hastings on Facebook:

Salesforce did not provide us with a statement, but sent us a link to the following tweet:

Several major tech companies have moved to ingratiate themselves with the Trump administration in recent weeks. SpaceX and Tesla CEO Elon Musk, who voiced opposition to Trump during the election season, recently accepted a role advising Trump on economic policy along with Uber CEO Travis Kalanick. Oracle CEO Safra Catz took a position in the Trump transition team last month.

SpaceX, Tesla, and Oracle have yet to respond to questions about how Trump’s executive order will impact their businesses. We will update this post as we receive more comments and statements.

It’s worth noting that a number of other tech CEOs and luminaries have also been outspoken about the ban. Box CEO Aaron Levie, for example, took to Twitter to voice his displeasure with the ban.

“We’re very much against the ban and will be working to both protect our employees but also work to make it clear that this is unacceptable and fight it however possible,” Levie also told us in an email.

Twitter CEO Jack Dorsey criticized the executive order and linked to a statement from the Internet Association, an advocacy group that represents many major tech companies.

Dorsey is also the CEO of Square. The payments platform issued a statement noting the “contributions of our immigrant-owned small businesses” to the nation’s economy.

Airbnb CEO Brian Chesky offered a brief statement on Twitter that obliquely referenced Trump’s executive order:

Dropbox CEO Drew Houston called Trump’s executive order “un-American.”

Y Combinator‘s Sam Altman also took to his blog today to express his views and implore tech companies to take a public stand. “It is time for tech companies to start speaking up about some of the actions taken by President Trump’s administration,” he wrote, later adding that “if this action has not crossed a line for you, I suggest you think now about what your own line in the sand is. It’s easy, with gradual escalation, for the definition of ‘acceptable’ to get moved. So think now about what action President Trump might take that you would consider crossing a line, and write it down.”

Altman did not address the role Y Combinator partner Peter Thiel is playing in the Trump administration. Altman previously defended his decision to keep Thiel as a YC partner after calls for him to be removed from his role at YC.

Fog Creek CEO Anil Dash called on tech employees to pressure their bosses to take a stand on immigration. Dash published a form letter employees could send to their CEOs:

Meanwhile, investor Chris Sacca is matching donations to the ACLU (and has since raised his matching offer to $75,000):

Bret Taylor, former Facebook CTO and founder of the Salesforce-acquired Quip, is also joining the wave of ACLU donations:

Etsy CEO Chad Dickerson, in response to Re/Code’s Kara Swisher, said that he opposes “excluding people from US based on their nationality or religion, period.”

Featured Image: Natasha Japp Photography/Getty Images
