I was watching an old episode of Deadliest Catch recently and one of the captains said he’d staff the boat with robots and run it from Seattle if he could. Naturally, that got me thinking…
First, could robots crab fish? Most of the process could be automated pretty easily since it's a lot of repetitive actions. But those actions are complicated by rolling seas and winter storms. So, first order of business: the robots would need to be able to carry out their tasks on a shifting surface that isn't always level. Or dry.
The two other tasks that, while repetitive, require some form of human cognition are throwing the hook and sorting the crab.
Throwing the hook could potentially be done by a pneumatic launcher that reels the line back in. Or by a specially designed robot. Or something else. But the tool will need to judge the distance to the line being hooked. Launching the same distance every time seems inefficient and carries the common issue of having to choose a "standard" distance: what happens when an outlier crops up? The entire system will need to adapt and handle missed throws that may require turning the boat around. Crab fishing isn't just an assembly line on the ocean.
These seem like minor quibbles though, and it’s the crab sorting that interests me most.
Crabs must be of a certain size, sex, and species. A crab is plucked from a pile and checked to make sure it is a) the right species, b) male, and c) at least the minimum size. If the crab meets all three criteria it is kept; if it doesn't, it is returned to the ocean. Sounds like a problem for computer vision! A system could be trained on pictures of different crab species and sexes and then set to look for only males of the species being fished. Then the crab could be measured and compared against the minimum allowable size as the final yes/no criterion. You could even develop a different system for each species, since only one species is fished at a time.
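Once the vision system has output a species, a sex, and a measurement, the keep/return decision itself is trivial. A minimal sketch of that final check (all names and size thresholds here are hypothetical, purely for illustration):

```python
# Hypothetical sketch of the crab-sorting decision described above.
# Assume a vision model has already produced (species, sex, carapace width);
# we just apply the three keep/return rules. Thresholds are made up.

MIN_WIDTH_MM = {"red_king": 165, "opilio": 78}  # illustrative legal minimums

def keep_crab(species: str, sex: str, width_mm: float, target: str) -> bool:
    """Keep only males of the target species at or above the minimum size."""
    return (
        species == target          # a) right species
        and sex == "male"          # b) male
        and width_mm >= MIN_WIDTH_MM[target]  # c) at least minimum size
    )

print(keep_crab("red_king", "male", 170.0, "red_king"))    # True: keep it
print(keep_crab("red_king", "female", 170.0, "red_king"))  # False: back in the ocean
```

The hard part, of course, is everything upstream of this function: getting reliable species, sex, and size estimates from a camera on a wet, rolling deck.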
So, yeah, I think it’s feasible that we could have robot crab fishers one day.
While writing my post the other day about tech companies and government contracts for AI, I started to realize what a strange situation it is. The brands don't seem to mix with the goals of the contracts.
Microsoft is the most normal-seeming of the bunch; they're more of a B2B company that has probably been powering government for ages now. The HoloLens is a way for them to be the computing platform of the future and make a play for being the dominant OS again. I would guess it's the most brazen military use they've gone after, but they're really just looking to become the military's computing platform in the field. It fits.
Amazon and Google though. Those seem a bit…off.
Google’s involvement in Project Maven fits in that Big G dabbles in everything. They probably saw some interesting AI problem that could potentially be solved by a massive amount of compute and realized they could be paid for it. Also, identifying objects in an image via computer vision is basically a search problem. And search is kind of Google’s thing.
Amazon's facial recognition push is really strange for a consumer retail and logistics company. It makes a bit more sense if you view it as an AWS project, but even then it seems like an odd choice. Amazon is all about owning every aspect of the customer experience and positioning itself as the only resource a consumer needs when it comes to shopping. Even AWS could be seen as a platform to power other companies' serving of customers, essentially getting Amazon a cut of any consumer activity that doesn't happen on its own platform. Rekognition is a weird fit. And for a company so focused on its reputation and on positive customer associations, it seems like more of a potential liability than a win.
Ultimately these tech giants are becoming the new GEs. We may know them for a few specific things, but they appear to have designs on making money in every way feasible. Of course, that hasn't worked out so great for GE in the long run, so this could get interesting.
There has been a trend of sorts lately of tech giant employees boycotting their employers' military and government AI contracts. I get it; none of these people signed up to make weapons. That's putting aside the fact that not all of the boycotted activities were direct weapons projects. But still, they probably took these jobs wanting to improve people's lives, not take them.
But! I would argue these are exactly the kinds of people we want developing these technologies, especially the iterations that have "intelligence" or "intelligent" attached to them. I want people with moral hang-ups about this tech's usage to have an active voice in its development process.
Also, there is a long history of milestone tech achievements being directly related to military R&D. Autonomous vehicles, it could be argued, exist in part because of DARPA. Google Maps can thank the military's Global Positioning System for being possible. And, oh yeah, the internet! Basically, this isn't the first time that tech and war have intermingled.
I fear what gets lost in the decision-making process of these individuals is that just because they say "no" doesn't mean the military is going to say "well, we tried" and walk away. Nor are other countries, friend or foe, going to take any notice of our hesitancy and let it guide their decisions. This means we could be left with only people who have no moral qualms about the work developing the next generation of smart weapons and technologies. And honestly, that thought terrifies me.
Merry Christmas data nerds! 🎁
Congress has passed the OPEN Government Data Act, which should mean a bunch of new, shiny data to play with. And in a sector that could definitely benefit from what data analysis and machine learning could bring to bear. 📜
Find more datasets here.
Src: E Pluribus Unum
Facebook has open sourced their PyTorch-based natural language processing modeling framework. According to them it: 👇
blurs the boundaries between experimentation and large-scale deployment.
Looking forward to trying this out. 🤓
So China is really, really good at facial recognition algorithms. Like best-in-the-world good. I wonder what they might use this for? 🤔
Maybe something like this:
Move over facial recognition, there’s a new camera-based identification technology on the scene. Welcome gait recognition. 🚶‍♂️
Chinese company Watrix can now identify and track people based on the way they walk. So just walk like Monty Python all the time? Nope, won’t do the trick. 🙅‍♂️
According to Huang Yongzhen, the CEO of Watrix: “You don’t need people’s cooperation for us to be able to recognize their identity. Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”
It appears that China is going to lead the way in all forms of tracking and recognition so it can all be bundled up into a citizen control and monitoring system. Big Brother is watching indeed. 👁️
Src: Outer Places
Cash does rule everything around us. 💰
There seems to be a trend amongst Chinese tech companies to deflect when asked about what societal implications their tech could have by shrugging and talking dollar signs. 💲💲💲
“We’re not really thinking very far ahead, you know, whether we’re having some conflicts with humans, those kinds of things,” [SenseTime co-founder Tang Xiao’ou] said. “We’re just trying to make money.”
Src: Bloomberg
Exhibit B:
According to Huang Yongzhen, the CEO of Watrix: “You don’t need people’s cooperation for us to be able to recognize their identity. Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”
Src: Outer Places
“We don’t support the government,” [Su Qingfeng, the head of ZTE’s Venezuela unit,] said. “We are just developing our market.”
Src: Reuters
I find it interesting that state supported companies in a Communist country keep using Capitalism as a shield. 🛡️
The LawyerBot 3000 might soon be a reality thanks to Harvard. They have digitized over 6 million cases to aid in the development of AI systems for the legal sector. So fire up your NLP and get ready to object! ⚖️
Src: Caselaw Access Project
Computer vision is the engine behind China’s Panopticon, and SenseTime is the engine behind many of those computer vision capabilities. And a lot of what they have developed has a dystopian feel to it: hidden cameras scanning faces and triggering the appropriate actions via AI. 📹
“That’s really how they see future interactions,” says Jean-François Gagné, who runs Canadian startup Element AI Inc. “You don’t need to log in to your computer, you don’t need to get a boarding pass, you don’t need to do anything anywhere. You’re just recognized.”
Not gonna lie, that does sound pretty cool. No more remembering passwords or tickets, no panicked pocket checking. But that glosses over the tyrannical implications of the tech as well, like freezing a dissident out of everything based on their face. 👤