You May Also Like
Ironies of Luck https://t.co/5BPWGbAxFi — Morgan Housel (@morganhousel) March 14, 2018
"Luck is the flip side of risk. They are mirrored cousins, driven by the same thing: You are one person in a 7 billion player game, and the accidental impact of other people\u2019s actions can be more consequential than your own."
I’ve always felt that the luckiest people I know had a talent for recognizing circumstances, not of their own making, that were conducive to a favorable outcome, and for quickly taking advantage of them.
In other words, dumb luck was just that: it required no awareness on the person’s part, whereas “smart” luck involved awareness followed by action before the circumstances changed.
So, was I “lucky” to be born when I was—nothing I had any control over—and to come of age just as huge databases and computers were advancing to the point where I could use those tools to write “What Works on Wall Street”? Absolutely.
Was I lucky to start my stock market investments near the peak of interest rates, which allowed me to spend the majority of my adult life in a falling rate environment? Yup.
It was pretty simple to do—Apple Time Machine backups let me do it with one click.
That first tweet captures, in two pictures, how badly Apple has “lost the plot” (to quote @wylieprof). On the right is the Apple MagSafe adapter, from 2013. On the left, what I had “upgraded” to.
Thanks, Apple! I really was nostalgic for worrying about yanking my computer off the table.
Oh, and I really appreciated not knowing whether my computer was charging. And the little whoop sound was a great touch: it let the speaker in front of me know I was charging my laptop.
2 Research conditions are theoretical and/or idealized. A huge problem for so-called NLP or AI startups with highly credentialed academic founders is that they bring limited knowledge of what it takes to build real products outside the lab.
3 A product is ultimately a thing that people pay for, not just cool technology or user experience. But I’m not even talking about knowledge gaps in go-to-market work. I'm talking purely technical gaps: how you go from science project to performant + delightful user experience.
4 Most commoditized NLP packages solve well-understood problems in standard ways that sacrifice either precision or performance. In a research lab, this is not usually a hard trade-off; in general, no one is using what you make, so performance is less important than precision.
5 In software, when you’re making something for real people to use, these tradeoffs are a big deal. Especially if you’re asking those people to pay for what you’ve made (can’t get away from that pesky GTM thinking). They expect quality, which includes precision AND performance.
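The precision-versus-performance tradeoff in the thread above can be made concrete with a toy sketch (my illustration, not the thread author's): exact edit distance is precise but quadratic in the string lengths, while a cheap length-based bound is nearly free but can wildly underestimate. A common production pattern is to tier them, filtering with the fast approximation and confirming with the slow exact computation.

```python
# Illustrative sketch of a precision/performance tiering pattern in
# NLP-style string matching. Function names are hypothetical.

def edit_distance(a: str, b: str) -> int:
    """Exact Levenshtein distance: precise, but O(len(a) * len(b))."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cheap_lower_bound(a: str, b: str) -> int:
    """Fast approximation: the length difference is a lower bound on
    the true edit distance. O(1), but can badly underestimate."""
    return abs(len(a) - len(b))

def near_duplicates(pairs, max_dist):
    """Filter with the cheap bound first; run the exact (slow)
    computation only on the survivors."""
    return [(a, b) for a, b in pairs
            if cheap_lower_bound(a, b) <= max_dist
            and edit_distance(a, b) <= max_dist]
```

In a research setting you might just run the exact computation everywhere; at product scale, the cheap pre-filter is what keeps latency acceptable, and choosing how aggressive to make it is exactly the precision-versus-performance call the thread describes.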
It's all in French, but if you're up for it you can read:
• Their blog post (lacks the most interesting details): https://t.co/PHkDcOT1hy
• Their high-level legal decision: https://t.co/hwpiEvjodt
• The full notification: https://t.co/QQB7rfynha
I've read it so you needn't!
Vectaury was collecting geolocation data in order to create profiles (e.g., people who often go to this or that type of shop) so as to power ad targeting. They operate through embedded SDKs and ad bidding, which makes them invisible to users.
The @CNIL notes that profiling based on geolocation presents particular risks, since it reveals people's movements and habits. Because it is that risky, the processing requires consent — this is the heart of their assessment.
Interesting point: they justify the decision in part by how many people COULD be targeted in this way (rather than how many actually have been — though they note that too). Because the processing happens on phones, and most people have one, it is considered large-scale processing no matter what.