TL;DR MIT researchers have developed an antitampering ID tag that is tiny, cheap, and secure. It is several times smaller and significantly cheaper than the traditional radio frequency tags that are used to verify product authenticity. The tags use glue containing microscopic metal particles. This glue forms unique patterns that can be detected using terahertz waves. The system uses AI to compare glue patterns and calculate their similarity. The tags could be used to authenticate items too small for traditional RFIDs.
Oh shit, it uses AI?!? Not just regular code? I’m in.
Still waiting to hear how blockchain factors in
The glue “patterns” are actually NFT apes
It uses dynamically cloudified functionalized AI models using an Agile setup.
Just from this comment alone my net worth has already skyrocketed to $2 Trillion.
Hey everyone this guy sounds smart, let’s make him a CEO
He lost me so I’m all in.
I’m more concerned with what Ja Rule thinks about this.
AI doesn’t seem necessary for comparing glue patterns
Not only is it unnecessary, it’s not happening! ;)
Before LLMs, people would call if/else blocks AI.
You’d have to read the article to know what they’re getting at.
The use case provided was for businesses like a car wash that puts a sticker on a car windshield. The ML model would be able to detect if the customer attempted to transfer the sticker from one car to another.
A pretrained ML model to detect this is actually a very good use case.
However, I think the implementation of this as an “anti-tampering detector” is a dangerous route to tread, since there are other factors that need to be considered.
No, it uses quantum-computed blockchain hashes in order to contact the OpenAI servers to retrieve a decentralized, encrypted language model
I think my eyes just threw up from having to read that.
We made a tag that can’t be reliably and deterministically scanned so we also included a machine learning model that takes a good guess at it.
I just don’t see how you could possibly rely on a black box model for anything important. You have no way to mathematically prove if there are collisions in the model output or not, and newer versions of the model can’t be made backwards compatible. So if you have a database of thousands of these tags scanned, then they discover a critical vulnerability and provide a new model, you’re SOL and everything you have is worthless.
Can you imagine your house doorknob had to think about the shape of your key before letting you in, and then have the possibility of just saying “No. Not today.”?
If there were collisions in the output you’d see them while scanning those thousands of entries. And if they release a new model you can use it going forward and keep scanning the old items with the old one.
This happens in inventory sometimes, new technology comes out, you have to update asset tags.
Tell me you’ve never developed commercial security software without telling me. “If it works a few thousand times without collisions it should be reliable enough”. That’s not even good enough for tamper-proof seals on medication and yogurt jars, let alone applications that require the sender and recipient to use a dedicated terahertz scanner to validate.
… Damn AI fanboys smh
Nobody said anything about security applications, lol. It’s a proof of concept and you’re getting all worked up over complete hypotheticals. Where did you even get the idea that it would have collisions within thousands?
In security applications you need to account for actors with a marked interest in causing a collision but in an inventory scenario you simply generate IDs randomly until you get one that’s not a duplicate. There’s no problem using a hash algorithm with collisions if the probability is small enough. There are tons of scientific labs using MD5, btw.
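That “small enough probability” point is easy to sanity-check with the birthday bound. A rough sketch (the function name, ID sizes, and counts are illustrative, not from the article):

```python
import math

def collision_probability(n_items: int, id_space: int) -> float:
    """Birthday-bound approximation: probability that at least two of
    n_items randomly drawn IDs collide in a space of id_space values."""
    return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 * id_space))

# A 128-bit ID space (e.g. an MD5-sized digest) holds 2**128 values.
space = 2 ** 128
print(collision_probability(10_000, space))   # effectively 0.0 for 10k tags
print(collision_probability(2 ** 64, space))  # ≈ 0.39, the classic birthday threshold
```

So at inventory scale (thousands of tags), random collisions in a decent-sized ID space are a non-issue; it only becomes a problem when an adversary is actively trying to manufacture one.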
It’s used to identify similarities in glue patterns. In what way wouldn’t this be backwards compatible? New versions would just be better at it.
To clarify what OP meant by his “AI” statement:
The system uses AI to compare glue patterns […]
The researchers noticed that if someone attempted to remove a tag from a product, it would slightly alter the glue with the metal particles, making the original signature slightly different. To counter this they trained a model:
The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.
It’s a good use case for an ML model.
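The article doesn’t describe the model’s internals, so here’s a hypothetical sketch of the general idea: reduce each terahertz scan to a feature vector, then compare it against the fingerprint stored at registration. The threshold and vectors below are made up:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

SIMILARITY_THRESHOLD = 0.95  # made-up cutoff for "same tag, untampered"

def check_tag(stored_fingerprint, fresh_scan):
    """Compare a fresh scan to the registered fingerprint."""
    score = cosine_similarity(stored_fingerprint, fresh_scan)
    verdict = "ok" if score >= SIMILARITY_THRESHOLD else "possible tampering"
    return verdict, score

print(check_tag([4.0, 1.0, 0.2], [4.1, 0.9, 0.25]))  # nearly identical scan -> "ok"
print(check_tag([4.0, 1.0, 0.2], [1.0, 4.0, 2.0]))   # disturbed glue pattern -> flagged
```

A real system would learn the representation and threshold from data; the point is just that “detecting tampering” here reduces to a similarity comparison, not some opaque oracle.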
In my opinion, this should only be used for continuing to detect the product itself.
The danger that I can see with this product would be a decision made by management thinking that they can rely on this to detect tampering without considering other factors. The use case provided in the article was for something like a car wash sticker placed on a customer’s car.
If the customer tried to peel it off and reattach it to a different car, the business could detect that as tampering.
However, in my opinion, there are a number of other reasons where this model could falsely accuse someone of tampering:
- Temperature swings. A hot day could warp the glue/sticker slightly which would cause the antitampering device to go off the next time it’s scanned.
- Having to get the windshield replaced because of damage/cracks. The customer would transfer the sticker and unknowingly void it.
- Kids, just don’t underestimate them.
In the end, most management won’t really understand this device beyond statements like, “You can detect tampering with more than 99 percent accuracy!” And unless they inform customers of how the anti-tampering works, customers won’t understand why they’re being accused of tampering with the sticker.
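That 99 percent figure is also easy to misread. If genuine tampering is rare, even a 99-percent-accurate detector produces mostly false alarms. A back-of-the-envelope sketch (all numbers hypothetical):

```python
def false_alarm_share(n_scans, tamper_rate, accuracy):
    """Of all tampering alarms raised, what fraction are false?
    Simplistically assumes the same accuracy on tampered and clean tags."""
    tampered = n_scans * tamper_rate
    clean = n_scans - tampered
    true_alarms = tampered * accuracy        # tampered tags correctly flagged
    false_alarms = clean * (1 - accuracy)    # clean tags wrongly flagged
    return false_alarms / (true_alarms + false_alarms)

# 10,000 car-wash scans, 1 in 1,000 customers actually moves the sticker,
# model is right 99% of the time:
print(false_alarm_share(10_000, 0.001, 0.99))  # ≈ 0.91: roughly 9 in 10 alarms are false
```

That’s the base-rate problem: the rarer real tampering is, the more every alarm is dominated by false accusations of honest customers.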
I wish I could ban the term “AI” from public discourse.
Then you’re in for a bad time. It’s a game changer, even if over-hyped.
My problem is that “AI” is an overly broad term that leads people to conflate very different technologies. I just want people to use more specific language.
There’s a corporate initiative where I work that we’re going to offer AI in 2024. When I politely asked them to expound on that, I was met with blank stares.
Like motherfucker do you realize even MS Teams uses AI for meeting transcription
“Offer Ai…for what?”
". . . we’re going to offer Ai. To. . .have. . .Ai. . . ."
I mean they could call it machine learning instead but that is just a type of AI.
Machine learning: we don’t know how it works
AI: we don’t want you to know how it works
Exactly. They might as well write “magic” since it’s about as descriptive.
Oh, physical tag. I thought this was going to be about cryptographic data signing.
Terahertz is not utilized much yet due to cost.
Is that right?
Cost is somewhat relative at this scale. THz is just unstable over any normally expected usable distances. At this scale, I assume they are thinking more like NFC replacement tech maybe? It’s definitely got more applications than just object tracking, but that seems to be its first test.
It’s updated RFID tags that are even harder to detect and, unlike the standard ones, require specialized technology to activate, so they get security through obscurity.
It has nothing to do with NFCs. Likely has a lot to do with the police state ID cards and vehicle tracking that the Bush Administration tried to implement in the late 2000s.
These tags should be smaller and cheaper because the expensive tech is offloaded to the scanners. Since a store uses lots of tags and only a couple of scanners, this might make financial sense even if the scanners are more expensive, as long as the per-tag savings add up over enough tags.
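That trade-off is simple breakeven arithmetic. A sketch with made-up prices:

```python
def breakeven_tags(extra_scanner_cost, n_scanners, saving_per_tag):
    """Number of tags at which pricier scanners pay for themselves,
    given the per-tag saving of the new tech. All inputs hypothetical."""
    return (extra_scanner_cost * n_scanners) / saving_per_tag

# Say each THz scanner costs $500 more than an RFID reader, the store
# runs 2 scanners, and each tag is 1 cent cheaper than an RFID tag:
print(breakeven_tags(500, 2, 0.01))  # → 100000.0 tags to break even
```

So for a store churning through hundreds of thousands of tags a year, moving cost into the scanner side works out quickly; for low-volume users it doesn’t.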
Of course it uses AI!
I read somewhere about a similar implementation using glitter mixed into clear nail polish. Take a close-up photo any time and visually compare it with the original; no ML/AI model necessary.
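For the glitter scheme, even the automated version of that comparison can be dead simple. A toy sketch using an average hash over grayscale pixel values (the pixel lists are stand-ins for real photos; everything here is illustrative):

```python
def average_hash(pixels):
    """Toy perceptual hash: 1 bit per pixel, set if brighter than the mean.
    `pixels` is a flat list of grayscale values (a stand-in for a photo)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [10, 200, 30, 220, 15, 210, 25, 205]   # registration photo
later    = [12, 198, 28, 223, 14, 212, 27, 203]   # same glitter, new lighting
moved    = [200, 10, 220, 30, 210, 15, 205, 25]   # pattern disturbed

print(hamming(average_hash(original), average_hash(later)))  # 0 bits differ: match
print(hamming(average_hash(original), average_hash(moved)))  # 8 bits differ: no match
```

The hashing absorbs small exposure/lighting differences while still flagging a rearranged pattern, which is roughly the same job the ML model is doing for the glue fingerprints, just cruder.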
I smell glue