A quick one for you. I have received reports, now from multiple sources, that the major torchbearer of the RISC-V platform, a company known as SiFive and formed by the original architects of the RISC-V instruction set, has gone through some major changes.
Ian Cutress muses upon rumors around SiFive, the forerunner of high-performance RISC-V cores.
maintains the openness and customization that RISC-V offers
Thinking about cybersecurity: does this kind of openness mean that some bad actors could now design malicious behaviour into the hardware, and no scanner software would ever be able to detect it, because the scanner itself is only software?
That sounds like lots of extra work, when current CPU manufacturers built that hidden space in already. Intel Management Engine is a great example.
security through obscurity is a bad practice.
it’s better to be transparent and let everyone analyze your design. the more eyes on it, the better. even the proprietary and obscured Intel CPUs have had security vulnerabilities in the past.
I don’t think it’s so much “security by obscurity” as it’s an issue of a much lower bar for chip production. Intentional back doors or malware represent a huge risk for a product line, so manufacturers won’t put them in without someone like the NSA leaning on them. It’s a simple risk/benefit calculation.
But the risk is much lower if you can snag a processor design off the 'net, make your modifications, send it off to a fab and sell it under a fly-by-night operation. If it’s ever discovered, you take the money and run.
deleted by creator
I don’t see it as irrational. You’re thinking about it the wrong way round.
Manufacturers buy chips from proven sources, where the chip can be traced back to the fab that made it. The entire system of trust is built on the assumption that the chip designers and fabs are trustworthy and that the shady stuff happens elsewhere in the supply chain.
When the designers can’t be trusted, it breaks everything. Up until now it hasn’t been a problem except in extremely sensitive areas like military equipment - only governments can force a company to risk everything by compromising their own products. But take the risk away - make it cheap enough to design new microcontrollers - and what’s to stop a chip designer from taking money from (for example) the Russian mafia? IoT is huge, everywhere, and RISC-V is ideally suited for it.
Do you mean that someone could take the design, plant a hardware vulnerability in it and sell it? Sure, but that doesn’t require RISC-V to be possible; there are already vulnerable CPUs on the market. People have found such vulnerabilities even in reputable Intel CPUs, for example (look up Spectre).
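For anyone who hasn’t looked it up, below is a minimal sketch of the Spectre v1 (bounds-check bypass) gadget pattern from the original disclosure; the array names, sizes and cache-line stride are illustrative placeholders, not code from any real product.

```c
/* Minimal illustrative sketch of a Spectre v1 (bounds-check bypass) gadget.
 * Names and sizes are hypothetical; this is not code from any real product. */
#include <stdint.h>
#include <stddef.h>

#define STRIDE 512            /* spread accesses so one value maps to one cache line */

uint8_t array1[16];           /* in-bounds data the code is allowed to read */
size_t  array1_size = 16;
uint8_t array2[256 * STRIDE]; /* probe array used as the cache side channel */
volatile uint8_t sink;        /* keeps the compiler from removing the load */

void victim_function(size_t x)
{
    /* The bounds check is architecturally correct, but a mispredicted branch
     * lets the CPU speculatively run the body with an out-of-range x.
     * The byte at array1[x] then selects which line of array2 gets cached,
     * and an attacker can recover that byte by timing later accesses. */
    if (x < array1_size) {
        sink &= array2[array1[x] * STRIDE];
    }
}
```

Nothing in that snippet depends on whether the ISA is open or proprietary; the leak comes from microarchitectural speculation, which is exactly the point being made above.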
Dell iDRAC comes to mind as well.
iDRAC is specifically designed for remote management of servers. Calling it a back door is silly when it’s more of a front door. It’s how Dell intends for you to manage the server.
That’s the same train of thought I had when telnet was declared a back door in Huawei devices.
https://www.theregister.com/2019/04/30/huawei_enterprise_router_backdoor_is_telnet/
During its heyday I passed the HCNA-RS, and the first thing we were taught was to just use telnet as a means to enable SSH, then log back in and disable telnet.
Moral of the story: do not underestimate a nation state’s use of global tech media to effect a global drop of a product or manufacturer from the market.
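As a rough illustration of that telnet-to-SSH switchover (assuming a documentation placeholder address, not any real Huawei tooling), here is one way you might confirm afterwards that telnet really was disabled and only SSH is still listening:

```c
/* Rough sketch: probe a device's management ports to confirm telnet (23) is
 * closed and SSH (22) is open after the switchover. The address below is a
 * documentation placeholder, not a real device. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

static int port_open(const char *ip, int port)
{
    struct sockaddr_in addr;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 0;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    /* Blocking connect; succeeds only if something is listening on the port. */
    int ok = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
    close(fd);
    return ok;
}

int main(void)
{
    const char *device = "192.0.2.1";  /* placeholder management address */

    printf("telnet (23): %s\n", port_open(device, 23) ? "still open" : "closed");
    printf("ssh    (22): %s\n", port_open(device, 22) ? "open" : "closed");
    return 0;
}
```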
LUL. So you’re right but one of the horror stories I tell around campfires is how many folks don’t know about that front door.
So how about we agree to “surprise feature” for iDRAC? And, yes yes, I can feel the “they shouldn’t be admins” coming.
It has to be enabled, right? So if someone enabling iDRAC doesn’t know that it exists…
The person enabling it isn’t always still at the company.
MFW a so-called cyber security researcher learns about IPMI
I think a more appropriate example, in fact, is the Intel Management Engine. It’s the same idea as this Dell iDRAC, but not meant for the user. It has been described as an actually intended backdoor.
https://en.m.wikipedia.org/wiki/Intel_Management_Engine
deleted by creator
Don’t downvote this person, they’re just asking a question.