State of Oregon’s Talent Development for Artificial Intelligence in a Post-Pandemic World is largely useless and a very frustrating read.
Lost your job to AI? "Learn to code" says State of Oregon
Oregon’s Workforce and Talent Development Board (WTDB) Artificial Intelligence (AI) Task Force was established in 2019 with the goal of increasing Oregon’s AI talent pool. Unsurprisingly for a State of Oregon report, it totally misses the mark, is missing key lessons about the state and future of AI, and devotes a fair amount of space to completely useless political ambitions of Democrats instead of looking at the frank reality of the situation. I’m not sure whether to characterize this as a “plug your ears and pretend you’re in a different reality” report or a “learn to code” type insult to tech workers and minorities.
Like so many reports I read from people outside of tech, when you’re inside of tech it’s hard to look past the glaring inaccuracies, the technical assumptions, and the really silly projection that our magic wizards in the technical space have skills that can be bestowed upon everyone. Comparisons to magic are rather apt - Clarke's third law seems to apply, given how little the public seems to know about AI - but it’s not actually magic; it’s incredibly complex mathematics and science that drive AI.
First, let’s consider some aspects of this report, which can be found here and is dated October 2020.
The first part of the report provides a simple Executive Summary and an oversimplification of what AI is and does, without providing any sort of dissection of its holistic capabilities or dependencies. It’s framed with the Hollywood-esque “intelligence similar to humans” qualifier and doesn’t parse important differences, like how computer vision and deep learning are currently enormously more powerful than human capabilities. Most importantly, the definitions and examples provided in the introduction are limited to Artificial Intelligence, but littered throughout the report is a conflation of AI with Automation, Augmentation, and Robotics. These are different technologies, and while they can interact, a synergistic convergence of them in a way that is easy, cheap, and accessible has always been decades away.

For example, most people have smartphones, and smartphones have voice-activated AI (Siri, Alexa), but good luck training Siri or Alexa to prepare dinner for you. Hypothetically you can set up an integration between one of these AIs and a food delivery service, but most people don’t seek integration like that, because you could inadvertently have a $60 meal show up at your doorstep. AI can augment your abilities by finding you a recipe, but it’s still on you to cook the food. Someday Siri or Alexa will work with your oven or refrigerator to help you make dinner selections, but not only is that a decade away, preparing dinner will still require human intervention in the form of physical labor like chopping and prep. Someday, possibly, people will be able to afford a robot to do that chopping, but we’re talking 30-50 years from now before they’re cheap enough for middle-class people to have in their homes. Even in the industrial space, robotics projects are not nearly as successful as industry outsiders would like to believe - for example, the other day Walmart announced it was terminating its inventorying robot and going back to human workers.

These sorts of practical hurdles and social limitations are not discussed at all - nor is there a dissection of why businesses are hung up on AI, and what modern AI projects actually look like. I understand this would be too lengthy for the report, but these are important aspects to understand before diving into “possibilities with AI” that are frankly not possible for decades upon decades to come.
Back to the actual report: pages 8 through 21 are hot garbage. First there is a consideration of COVID-19’s effect on AI’s acceleration; unfortunately the authors were unable to differentiate marketing materials from actual reality, citing sponsored, paid content published in MIT Technology Review in April 2020 by Faethm (an AI company trying to sell its own relevance). In reality, COVID-19 and the resulting work-from-home shift haven’t truly impacted the trajectory of AI in any meaningful way. The actual underlying report from MIT about COVID-19 impacting America’s workforce looks at suitability for remote work and automation (not simply AI), identifying that cashiers and retail are most at risk. This isn’t exactly new information; cashiers have had the writing on the wall since barcode scanners arrived in the 1980s, and self-checkout has been around for 20+ years (and it turns out most people don’t like it). Underpinning this MIT study isn’t simply COVID-19 or AI, but that increased technologies all around (like the internet and smartphones) will continue disrupting industries like retail and healthcare.

Just weeks ago I had my first virtual doctor’s appointment and it felt like “Finally, I’m in the 21st century!” But to get an appointment it was still necessary to call them, the video conferencing software they used was dogshit, and overall many of the processes were still manual. Practically speaking we’re still in a bullshit world where we can’t fully embrace a telecommute lifestyle even with these technologies having been around for 20 years. For fuck’s sake, I had to provide a fax number to a client the other day, and I explained “Please email me once you’ve sent the fax.” If fax hasn’t died, we can’t pretend that what’s happening today with COVID is the new post-COVID lifestyle... this is all temporary.
Back to the report: after the COVID subsection is a reasonable section on opportunities regarding AI and climate change, but then the report stumbles into sections of “Inequality” and “Diversity” considerations to plug into Governor Brown’s horseshit Oregon Equity Framework, which essentially states “We’re really concerned about minorities, so keep voting for us.” WTDB’s AI report awkwardly notes the fully expected song-and-dance of “communities of color are disadvantaged” without really wanting to look too deeply into the issue or provide substantive analysis. It’s just skin-deep enough to make white liberals feel happy that we’ve yet again acknowledged minorities exist - pat on back, move on.
A more honest evaluation here would have noted the long-standing and very public criticism that tech (and specifically SaaS) as a business sector, and STEM as a field, is not inclusive to minorities or women. But hey, this report wants Oregonians to have good jobs, so we’ll ignore that and pretend these new AI jobs will be available to everyone! There’s another issue: AI and automation aren’t going to impact every person of color equally, as shown in the findings of the Metropolitan Policy Program (cited by this report on page 24) - which again looks at automation and not specifically AI - that men, youth, and Hispanic workers are most likely to be impacted by automation. While I think these findings are pretty dubious and speculative, Metropolitan suggests there’s an imbalance:
“Hispanic workers, for instance, account for 15.5 percent of the American labor force and yet represent 32.6 percent of the workforce in construction and extraction trades.” … “Black workers’ slightly lower average automation potential is accounted for by their overrepresentation in health care support and protective and personal care services, jobs which on average have under 40 percent current-task automation susceptibility.”
Though when we dig into the nuts and bolts here, we’re still talking about overwhelmingly white people being impacted in terms of raw quantity - particularly in the sector of “office and administrative support,” which is dominated by whites and will be heavily impacted by AI and automation. Oregon’s paper didn’t even bother noting this - because again, let’s just go skin deep here.
There’s a new trend in urban liberal thinking of lumping some section of minorities together to make a statement about “BIPOC” or “Communities of Color” to fit a political narrative, but when it comes to actually addressing the issue, certain people “of color” are entirely left out of the solutions. I think that’s what is going on here: we need to construct that “People of Color” are at risk, but don’t dive too deep into it. In 2020 Oregon decided to embrace a brand new form of racist discrimination by actively ignoring Hispanics and Native Americans while rewarding Blacks - which is exactly what “intersectionality” by government will predictably bring us: a new form of discrimination rather than equality. I thoroughly expect this trend to continue: Portland and Oregon leaders making a point about “BIPOC” by selectively lumping in Latinos and Native Americans, then ignoring Asians, Latinos, Native Americans, and other groups when crafting policy. I suspect this is why the State treats the term “minorities” so separately today - because it doesn’t fit the narrative that some subgroups should be more rewarded than others.
On a more positive note, the report does make mention of “minorities” and the racially-blind terms “low income” and “low skill,” and how this large segment of our population will be impacted. It’s really easy to tabulate and be objective about impacted demographics when we look at statements like “making less than $15 an hour” or a “high school or less” education. Here we could do real analysis on how AI will impact these people, but the report chooses not to and gives us contradictory answers: first acknowledging that these people will likely be left behind, then suggesting that AI will only impact “human labor at higher points on the skill ladder,” such as healthcare workers. The paper also notes that the European Union believes in “upskilling the workforce” by hiring “the best educators and scientists” --- whatever the fuck that means --- as if the State of Oregon is contemplating a program to send Hispanic dishwashers (who lost their jobs due to COVID closures) to Portland Community College to learn statistical data modeling with Python or R. It’s also suggested in other parts of this report that perhaps the better method is to help these low-skill workers understand how to work alongside AI, as if taking instructions from their new chatbot supervisor will be difficult.
The AI Strategies and Outlook section on page 21 provides the most valuable aspect of this report: a brief synopsis of other governments’ strategies that puts all of this in an international context. For example:
“Finland has launched a free six-week online course in AI to educate its citizens and has made the course accessible for anyone in the world to take it at no cost.”
Which is cool - good job, Finland - too bad Oregon doesn’t have the desire or interest to emulate something like that. While touting an international perspective, this report also dramatically understates the ambitious Chinese plan to be the unsurpassed dominator of the AI world. That plan is probably overblown and isn’t going to happen, but the Communist Party considers it a national security issue and has mobilized significant resources accordingly.
There’s an unstated issue in this whole paper about global competition in this space. After all, Oregon tech workers have known for almost two decades (and perhaps much longer) that we’re entirely disposable to out-of-state and international tech workers. Companies HQ’d in Oregon, and businesses with merely an office in Portland, have easily put aside locals for hot tasty talent out of the Bay Area or Seattle. Just the spicy aroma of a Puget Sound or Bay Area sea breeze makes them more desirable, I guess. How will we compete in this space? No idea.
There’s also a very real issue of why high-skill tech workers would want to stick around in a place like Portland when they can make 20% more in Seattle or 40% more in the Bay Area - though COVID is changing these dynamics, so it might pencil out differently. Either way, Oregon’s AI report doesn’t consider H-1Bs, or how and why Portland tech workers are going to compete against Indian, Chinese, or Bay Area AI workers. Part of this might be because this whole paper hasn’t decided whether it’s about training workers to accommodate AI or training workers to build AI.
On page 23 is probably the single most important statement in this document:
“For many years, businesses using AI have stated that the lack of skilled AI practitioners is the most important barrier to developing and deploying the technology. A survey from the Chinese technology company TenCent estimated that millions of AI jobs are available worldwide, but the AI experts actually capable of doing those jobs number only 300,000.”
There’s so much to unpack in this statement. The first question is: why is there such a huge gap between available jobs and workers? To outsiders this is a big mystery - the same mystery as why Cyber Security roles can’t be filled - but it’s not a mystery at all to people in tech. This shit is really, really hard and complicated. It’s work that a significant number of people don’t want to do, and aren’t cut out to do. That’s what it comes down to. Working in fields that build AI is just as complex as being a physician, rocket scientist, or engineer. You can’t just shove a certificate into someone’s hands and expect them to turn around and be productive. Moreover, while every business wants these incredibly valuable and skilled professionals, the hard reality is they don’t want junior-level people. Even before COVID there were hundreds of entry-level software developers, junior data scientists, and new security practitioners struggling to find work in Portland, even though these career fields are ostensibly in high demand. Those “millions of AI jobs available worldwide” are all for people with 10+ years’ experience, with technology that’s really only been around for 5 years - and that’s why there are only 300,000 people to fill those roles.
Code school graduates are seeing the same problem: every competent business wants more software developers on their team, but very, very few are willing to take a chance on a junior-level person. Even more challenging is the enormous competition among junior-level workers, because a junior-level software developer position posted in a place like Portland, Oregon generates hundreds of interested people locally, then a few thousand more across the country. I help junior-level people get into tech damn near every day and it is incredibly brutal, especially with how exclusionary our tech community is.
Later (on page 25) is a citation from Salesforce that can help us better understand why this skill gap exists - yet another citation in this document with an almost entirely different tone than this State of Oregon piece. In Salesforce’s “What's the Future of Work in the Age of AI?” they offer statements like “Successful AI means enhancing — not replacing — the human workforce” and “New jobs for the ‘missing middle’”, which are different from the perspective the State of Oregon is offering. The Salesforce article also offers this salient point:
“Research shows that while the majority of business leaders (97%) say they will use automation and AI to augment worker capabilities in the next three years, only 3% of executives plan to significantly increase investment in skills development programs in the same period.”
This lack of employee training is a critical reason there are millions of jobs available and no one to fill them. Deep in the document, in the “Recommendations,” are loosely worded acknowledgements of this problem and a vague suggestion that government should offer “incentives” for businesses - but it’s non-committal about what these incentives should target.
Pages 25-40 go on to list a variety of business “Sector-Based Impacts” of AI - but this whole subsection is entirely useless, as it’s speculative, often without timelines, based upon conditions that may not become true, and at times contradictory or disjointed. To some degree the State of Oregon deserves credit for citing real local businesses that are attempting to implement these systems (not just vague statements from Forbes and wannabe entrepreneurs), but it unfortunately does not weave a cohesive context. For example, the Agriculture subsection notes that 97% of the 34,600 Oregon farms are family operated, yet the report goes on to pretend that these family-owned farms are capable of utilizing enterprise-grade technology and purchasing expensive hardware, when in reality most family farms in Oregon are barely scraping by. No doubt some farmers will take advantage of software automation and AI as much as possible, but there’s still an enormous difference between software automation and robotics, especially in terms of capital cost. Moreover, most farmers want to reduce their operational costs and barely have IT overhead. Oregon’s agricultural industries are synonymous with tech debt. The moment you deal with real farmers you can see how their capital purchasing decisions are made: fuck the computers, give me something reliable that I can fix on my own. The family farmers of Oregon are not going to rapidly embrace robotics. Equally, they’re not going to happily hire IT people to help them weave some AI magic that can help them grow their crops better.
This same report, just a few pages later, notes a particular problem with Forest Products: “In Oregon, specifically with electricians, there is a statewide shortage of “plant electricians” that all industries are fighting to hire into their operations.” If Swanson lumber (with estimated revenue in $50+ million a year) can’t find electricians, how will family farmers find them to maintain their robotics?
The analysis on farms should be compared to page 32’s analysis of manufacturing:
Small and medium manufacturers are being left behind in the Fourth Industrial Revolution. AI is a difficult area for these companies. The resource and data infrastructure requirements are heavy, and attempting to build the required technology is costly. Most manufacturers do not have the necessary skills and knowledge in-house.
It’s not simply “manufacturers” being left behind; these constraints exist for ALL BUSINESSES. As an insider I can assure everyone that every business is “interested” in advanced cool technology - until I come back with a 7-figure price tag, an application lifecycle, and ongoing expenses (in other words, that cool custom thing you built for $2.5 million is going to need overhaul and maintenance). This is where the rubber actually hits the road, and these projects continue to be far too expensive even for most large enterprises in Oregon. It’s a fallacy to presume that even large businesses have eagerly and successfully adopted AI and that these programs have staying power. What’s actually happening with a lot of advanced technologies is proofs of concept that attract a lot of attention but don’t pay out financially.
The pathway forward with AI isn’t each business building their own Deep Learning algorithm, it will be businesses licensing their AI to other businesses as a service. Still, it’s going to take expertise and professional services to install and utilize these complex analysis tools. Many of the statements made in this document are optimistic about what could happen in 10 to 20 years, not what will be happening in 2025.
Page 40 gets into the real disconnect between what businesses say, what government and academia hear, and what’s actually happening in reality. This statement on page 42 really stood out:
For example, an AI associate degree could be given to a student that has mastered statistics, basic data analysis, some computer programming in Python, and a commonly used AI platform such as Tensorflow. Such a person could be an “AI mechanic.” As another example, a certificate could be given for data preparation or data mining that would involve basic work in Excel or SQL. Programs like these would make excellent opportunities for retraining low-skilled workers that may have been displaced by either AI or the COVID-19 pandemic.
In other words: “Laid off due to COVID? Learn to code.” The biggest farce of all is pretending that low-skilled workers have the personal attributes necessary to engage in a highly technical job - for example, a desire for continuous learning is absolutely critical. An inherent comprehension of complex mathematics is very important for software developers, most especially AI builders and database professionals. If you don’t have these attributes, building software and working with data will be the most frustrating job you’ve ever had. The report even makes mention on page 41 of the changes necessary to existing academic paths to accommodate a future AI pathway, with “many more required mathematics courses, four new AI courses, numerous additional AI electives” - and we’re pretending this is a pathway for low-skilled workers?
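To make concrete what “mastered statistics, basic data analysis, some computer programming in Python” actually demands, here’s a toy sketch of mine (not from the report): fitting a straight line to data by ordinary least squares, in plain Python with no libraries. Even this, the absolute floor of “data modeling,” assumes comfort with means, variances, and covariance:

```python
# Toy example (mine, not the report's): ordinary least squares by hand.
# Even the simplest "data modeling" exercise requires understanding
# means, covariance, and variance - the math the report hand-waves past.

def fit_line(xs, ys):
    """Fit y = a*x + b by minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x  # intercept passes through the means
    return a, b

# Hypothetical hours-studied vs. test-score data
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = fit_line(xs, ys)
print(f"y = {a:.2f}x + {b:.2f}")
```

Real AI work layers gradient descent, linear algebra, and probability on top of this, which is exactly why a short certificate program doesn’t produce a productive practitioner.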
In addition, this report pretends that a community college’s own invention of an AI certificate would hold water with private employers looking for talent. Junior talent already has a tough time; it would be even tougher with an irrelevant certification from a shitty institution like Portland Community College. I really don’t have a big gripe against PCC, but having known hundreds of students who have gone through their programs, it’s been clear to me that PCC (and most Pacific Northwest colleges) have a difficult time producing work-ready talent. It would be infinitely better for a worker to pursue a technical certification from a widely recognized organization like CompTIA, Microsoft, Google, or Amazon. Code schools are already having a tough time converting their graduates into employees; I have serious doubts PCC will be able to reliably pump out anything resembling a “tech” worker.
Page 45 gets into the policy recommendations which are frustratingly generic and non-committal. I recognize that reports like this are simply one step in a lengthy and slow journey for government, but it’s plainly clear most of this is never going to happen. For example, I love this bullet under Education & Training, “Work with the Oregon Education Department and the Oregon STEM Council to increase appropriate introductory AI, computer science and programming, and applied mathematics curricula for all Oregon P-12 students.” But, c’mon, what universe do these authors live in? Do they live in the same State of Oregon I do, where Oregon Education Department barely manages itself? Obviously government concludes “let’s use government” but reasonable Oregonians need to understand that we have agencies that are fundamentally broken. This is as absurd as me going to the police department and asking them for help reducing drugs in society - the solution cops come up with isn’t going to work.
The biggest problem is a narrow understanding of what AI is today, versus what it will become
This whole report reads with a tone of really not being sure what to think about AI or what Oregon should do with it. There’s a lot of editorializing of a narrative about AI, but the report really can’t commit to whether Oregon should consider AI a source of racist bias or an opportunity for minorities.
It strikes me that the biggest piece of this is a foundational lack of understanding of where AI technologies are going. For example, this report simplifies the concept of Facial Recognition to visible light - essentially using AWS’s Rekognition service. This is just one tiny fraction of what facial recognition actually is, because there the computer is simply looking at photographic captures (or video) the same way human eyes do. Your phone’s facial recognition, however, is likely powered not by visible light but by infrared light, something humans are incapable of seeing. There are two prevailing problems with facial recognition bias: the first is that the learning algorithm can be initially trained with a limited set of minorities, and the other is that visible-light cameras have a difficult time with dark-skinned individuals simply due to their pigmentation. These real challenges will be overcome, especially as the big players in AI have recognized the necessity of training AI properly, but also due to changing technologies in how we capture images for the computer to read - as we move into infrared, thermal, and ultra-high-definition, these technologies will get better.
But facial recognition is only one simple product in a wave of upcoming biometric identity services that artificial intelligence offers. This same family of tools provides fingerprint scanning, iris scanning, voice analysis, palm vein recognition, writing analysis, and much, much more. The future of this technology is a machine that is able to identify exactly who you are with complete certainty - and we have this now, it’s just not widely deployed. If we have these biometric tools currently, why aren’t they deployed everywhere? Facial recognition has a full 1 page out of the report’s 45 dedicated to its problems, yet the word “biometric” doesn’t appear anywhere. Equally, the concept of “computer vision” doesn’t exist in this report - just simplistic examples of the tools around today.
The perspective generally offered by non-technologists about AI sits permanently at the Peak of Inflated Expectations in the Hype Cycle. It’s not based upon what people, society, or businesses are actually doing, or what they’re likely to do.
I think this whole misunderstanding of AI and its capabilities is largely due to non-technologists imagining a world of possibilities to come, while not understanding that many or most of the things they’re imagining already exist today - and are actively being rejected by people, society, and businesses.
For better or worse, a simple example of this is the City of Portland banning facial recognition. The City of Portland routinely makes very bad technology decisions - not least of which is our same Mayor Ted Wheeler supposing that 5G radio waves are dangerous. Part of this is that urbanite liberal Portland likes to pretend we’re a bunch of intellectuals, but in reality we’re milquetoast fart-sniffing Luddites who are conspiratorial anti-science fundamentalists - this is why we show up on anti-vaxx hotspot lists.
The reason the City of Portland introduced the ban is that they were egged on by activists who were primarily motivated by the fact that Jacksons Foodstores near downtown Portland had the audacity to use facial recognition to exclude shoplifters. I met with these same activists prior to them taking it to the City and tried to explain that Safeway, Kroger, Target, and most grocery stores and many private businesses were already using this technology and would continue to - that if pressed, they’d just run it in secret. Which is predictably exactly what they did, and now big grocery simply refuses to comment. Yet walk into any Freddies, Safeway, or Target and you will still find the telltale cameras at face height, because they’re running facial recognition, and they’ll continue to.
I see the City of Portland’s wading into regulating AI going about as successfully as the State of Oregon’s attempts to encourage private businesses to use it. I wouldn’t be at all surprised if the State and residents/politicians in Portland find themselves on opposite sides of the question of what to do with AI.
What should be done differently with the State of Oregon and AI.
I think there are some very specific and simple solutions. To put it really simply: if the State of Oregon and urban liberals are going to continue to fall back on this “Learn to Code” concept, then let’s codify it and make it accessible. Basically, we need to stop fucking around with it as a personal responsibility or option, and instead make it much broader.
First, this starts with employers. I don’t think we’re going to be able to incentivize employers to proactively train their workforce, but I do believe the State of Oregon could mandate that certain employers maintain continuing education programs for a subset of employees. In other words: if your company has X number of employees, you need to have Y percentage in a continuing education program or face Z tax penalties. Most white-collar industries already do this voluntarily - almost all competent IT organizations require it, and many companies offer “work while you go to school” programs - we simply need to dramatically increase it. The gap is most noticeable in low-wage jobs, and no surprise, those are the industries that will be most impacted by automation. Practically speaking, this policy needs to be aimed at large retailers, manual-labor employers, food service, transportation, and factories, so that those workers have access to career pathways.
Next, we need to be completely transparent that minimal software development is a requirement to graduate from public schools, just as we require basic arithmetic and comprehension of the English language. The further we go down this pathway, the more we will realize it’s actually significantly easier than we first imagined. Portland has the notorious Arts Tax, and for as bad as that policy is, it does ensure and mandate that public schools offer arts programs to elementary school students. A similar program should be mandated for middle and high school students, where every student writes a “Hello World” application before graduating. This will demonstrate to urban liberals unequivocally that, much like a music program, some people have talents and some people don’t - and we’ll get a better understanding of who is cut out for what, while actually putting Oregon on the map for kids who understand technology. It’s kind of royally fucked up that family volunteers and groups like Microsoft are funding our public education, rather than the Department of Education realizing that computer languages might be inherently valuable for all people in just the same way that reading fucking Mark Twain or finger painting is. Unfortunately, as I wrote above, I have zero faith that Oregon’s education bureaucrats are even remotely capable of pulling this off.
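For scale, the entire “Hello World” graduation requirement I’m proposing is, in Python, literally one line:

```python
# The complete "Hello World" application proposed above as a
# middle/high school graduation requirement - one line of Python.
print("Hello, World!")
```

That’s the whole bar. Whether a student can be walked through typing and running this tells us something useful about aptitude, which is exactly the point.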
In these sorts of “learn to code” responses to changing technology, there is a very subtle but ever-present disrespect for technology workers. The premise is basically: anyone can do what you do, and fairly easily. It always fails to recognize that expertise in technology is just as difficult to obtain as expertise in medicine or engineering. This is particularly insulting when talking about highly complex advanced technologies like AI and automation - assuming we can solve this problem with “workforce training” and teaching people to code is as lazy as saying we can find more doctors by getting more people EMT-certified.
Again, I recognize that this report isn’t meant to be some big ground-breaking analysis and official policy for the State of Oregon, I just found it to be laughably out of touch with what’s actually happening in the real world.
I understand that Oregon politicians ostensibly don’t want our businesses or workforce to be left behind, but equally we can’t plan for the future by totally misunderstanding the complexities around us.