
Go Proposal: Secret Mode

236 points | 2 months ago | antonz.org
Someone2 months ago

FTA: “Heap allocations made by the function are erased as soon as the garbage collector decides they are no longer reachable”

I think that means this proposal adds a very specific form of finalisers to go.

How is that implemented efficiently? I can think of doing something akin to NSAutoReleasePool (https://developer.apple.com/documentation/foundation/nsautor...), with all allocations inside a `secret.Do` block going into a separate section of the heap (like a new generation), and, on exit of the block, the runtime doing a GC cycle, collecting and clearing every now inaccessible object in that section of the heap.

It can’t do that, though, because the article also says:

“Heap allocations are only erased if the program drops all references to them, and then the garbage collector notices that those references are gone. The program controls the first part, but the second part depends on when the runtime decides to act”

and I think what I am thinking of will guarantee that the garbage collector will eagerly erase any heap allocations that can be freed.

Also, the requirement “the program drops all references to them” means this is not a 100% free lunch. You can’t simply wrap your code in a `secret.Do` and expect your code to be free of leaking secrets.
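
To illustrate, a minimal sketch assuming the proposed runtime/secret API described in the article (only secret.Do is taken from the proposal; the rest is made up):

    // assumes: import "runtime/secret" (experimental)
    var leaked []byte // reference that outlives the secret region

    secret.Do(func() {
        key := make([]byte, 32) // pretend this holds key material
        leaked = key            // oops: the reference escapes the closure
    })
    // `leaked` keeps the allocation reachable, so the GC will never erase it.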

asmor2 months ago

My guess is that it just uses the existing finalizers and ensures the memory is overwritten.

https://pkg.go.dev/runtime#SetFinalizer
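
Roughly what that guess looks like with today's API - a sketch of the idea, not how runtime/secret is actually implemented:

    // assumes: import "runtime"
    type keyBuf struct {
        b [32]byte
    }

    func newKeyBuf() *keyBuf {
        k := new(keyBuf)
        // Once the GC finds k unreachable, zero it before the memory is reused.
        runtime.SetFinalizer(k, func(k *keyBuf) {
            for i := range k.b {
                k.b[i] = 0
            }
        })
        return k
    }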

tapirl2 months ago

SetFinalizer is deprecated by AddCleanup: https://pkg.go.dev/runtime#AddCleanup

AddCleanup might be too heavy, it is cheaper to just set a bit in the header/info zone of memory blocks.

tracker1 2 months ago

Same here, though I think the GC will do a full run so that all "secure" data is cleared regardless of the usual GC timing.

tracker1 2 months ago

My guess is that the GC will eagerly wipe and free "secret" memory first, and completely within the current pass, rather than deferring it to a later GC cycle the way ordinary memory may be handled on longer GC cycles to keep overall performance characteristics happy.

This is just my own speculation, I don't know the internals of Go beyond a few articles I've read over the years. Somewhat more familiar with C#, JS and Rust, and even then I'll sometimes confuse certain features between them.

dpifke2 months ago

Related: https://pkg.go.dev/crypto/subtle#WithDataIndependentTiming (added in 1.25)
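
Usage is roughly this (a sketch; the byte slices are placeholders):

    // assumes: import "crypto/subtle"
    var got, want []byte // e.g. two MACs to compare
    var equal bool
    subtle.WithDataIndependentTiming(func() {
        // the closure runs with data-independent timing enabled where the CPU supports it
        equal = subtle.ConstantTimeCompare(got, want) == 1
    })
    _ = equal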

And an in-progress proposal to make these various "bubble" functions have consistent semantics: https://github.com/golang/go/issues/76477

(As an aside, the linked blog series is great, but if you're interested in new Go features, I've found it really helpful to also subscribe to https://go.dev/issue/33502 to get the weekly proposal updates straight from the source. Reading the debates on some of these proposals provides a huge level of insight into the evolution of Go.)

kmeisthax2 months ago

I have to wonder if we need, say, a special "secret data" type (or modifier) that has the semantics of both crypto/subtle and runtime/secret. That is to say, comparison operators are always constant-time, functions holding the data zero it out immediately, GC immediately zeroes and deallocs secret heap allocations, etc.

I mean, if you're worried about ensuring data gets zeroed out, you probably also don't want to leak it via side channels, either.
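
Presumably something like this hypothetical wrapper - nothing like it exists in the standard library today:

    // assumes: import "crypto/subtle"
    type Secret struct {
        b []byte
    }

    // Equal compares in constant time regardless of where the inputs differ.
    func (s *Secret) Equal(o *Secret) bool {
        return subtle.ConstantTimeCompare(s.b, o.b) == 1
    }

    // Wipe overwrites the secret bytes in place.
    func (s *Secret) Wipe() {
        for i := range s.b {
            s.b[i] = 0
        }
    }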

fsmv2 months ago

One thing that makes me unsure about this proposal is the silent downgrading on unsupported platforms. People might think they're safe when they're not.

Go has the best support for cryptography of any language

fastest963 2 months ago

I'm not sure there's a realistic alternative. If you need to generate a key then it has to happen somehow on unsupported platforms. You can check Enabled() if you need to know and intend to do something different, but I assume most of the time you'd run the same function either way; you'd just prefer to opt into secret mode if it's available.

kbolino2 months ago

This is not what secret.Enabled() means. But it probably illustrates that the function needs to be renamed already. Here's what the doc comment says:

  // Enabled reports whether Do appears anywhere on the call stack.
In other words, it is just a way of checking that you are indeed running inside the context of some secret.Do call; it doesn't guarantee that secret.Do is actually offering the protection you may desire.
cafxx2 months ago

That's not how it's implemented (it returns false if you're inside a Do() on an unsupported platform), although I agree the wording should be clearer.

cafxx2 months ago

Filed a CL for this, hopefully it gets merged ~soon.

https://go-review.googlesource.com/c/go/+/729920

awithrow2 months ago

Why not just panic and make it obvious?

kbolino2 months ago

One of the goals here is to make it easy to identify existing code which would benefit from this protection and separate that code from the rest. That code is going to run anyway, it already does so today.

samdoesnothing2 months ago

Does it? I'm not disputing you, I'm curious why you think so.

pants2 2 months ago

Not OP, but Go has some major advantages in cryptography:

1. Well-supported standard libraries generally written by Google

2. Major projects like Vault and K8s that use those implementations and publish new stuff

3. Primary client language for many blockchains, bringing cryptography contributions from the likes of Ethereum Foundation, Tendermint, Algorand, ZK rollups, etc

adastra22 2 months ago

Do you mean “best support for cryptography in the standard library”?

Because there is tremendous support for cryptography in, say, the C/C++ ecosystem, which has traditionally been the default language of cryptographers.

fsmv2 months ago

Yeah, the standard library crypto package is really good and so is the tls package. There's also golang.org/x/crypto, which is separate because it doesn't fall under the Go compatibility guarantee. You can do all kinds of hashes, generate certs, check signatures, and do AES encryption, all built in and accessible. There are even lower-level constant-time compare functions and everything.

I'm a big fan of the go standard library + /x/ packages.
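
As a quick sketch of that surface (error handling elided), hashing, constant-time comparison, and AES-GCM are all available without third-party dependencies:

    // assumes: import "crypto/aes", "crypto/cipher", "crypto/rand", "crypto/sha256", "crypto/subtle"
    sum := sha256.Sum256([]byte("hello"))
    same := subtle.ConstantTimeCompare(sum[:], sum[:]) == 1 // constant-time compare

    key := make([]byte, 32)
    rand.Read(key)
    block, _ := aes.NewCipher(key) // AES-256
    gcm, _ := cipher.NewGCM(block) // AEAD mode
    nonce := make([]byte, gcm.NonceSize())
    rand.Read(nonce)
    ct := gcm.Seal(nil, nonce, []byte("hello"), nil)
    _, _ = same, ct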

Mawr2 months ago

And since any language can call those C/C++ libraries, all languages are equally good at cryptography! Thanks for the "insight".

drowsspa2 months ago

4. The community seems to have realized that untangling the mess that is building C/C++ stuff is a fool's errand and seems to mostly prefer to reimplement it in Go

int_19h2 months ago

"The best" is still a strong claim. How does it stack up against Java or C#, for example?

tracker1 2 months ago

Guessing it's also taking care to use assembly calls to zero out and clear the memory region as part of the GC... I would guess the clear/GC characteristics are otherwise the same, but having access to RAM on a non-supported platform could, in theory, allow for stale reads of raw memory.

This is likely done for platform performance, and having a manual version likely hinders the GC in a way that's deemed too impactful. Beyond this, if SysV or others contribute specific patches that aren't brute-forced (such as RISC-V extensions), I would assume the Go maintainers would accept them.

treyd2 months ago

> Go has the best support for cryptography of any language

This isn't true at all.

Writing cryptography code in Go is incredibly annoying and cumbersome due to the lack of operator overloading, forcing you to do method calls like `foo.Add(bar.Mul(baz).Mod(modulus)).Mod(modulus)`. These also often end up having to be bignums instead of using generic fixed-size field arithmetic types. Rust has incredibly extensive cryptographic libraries, the low-level ones taking advantage of operator overloading so the code ends up being able to follow the notation in the literature more closely. The elliptic_curve crate in particular is very nice to work with.
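
For anyone who hasn't written this style of code, here's roughly what (foo + bar*baz) mod m looks like with the standard math/big API - chained method calls instead of operators:

    // assumes: import "math/big"
    foo, bar, baz := big.NewInt(7), big.NewInt(11), big.NewInt(13)
    modulus := big.NewInt(97)

    res := new(big.Int).Mul(bar, baz) // bar * baz
    res.Add(res, foo)                 // + foo
    res.Mod(res, modulus)             // mod modulus
    // res is now (7 + 11*13) mod 97 = 53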

alanfranz2 months ago

I'd probably want some way to understand whether secret.Do is launched within a secret-supporting environment so that I'm able to show some user warning / force a user confirmation or generate_secrets_on_unsupported_platforms flag.

But, this is probably a net improvement over the current situation, and this is still experimental, so, changes can happen before it gets to GA.

pjmlp2 months ago

Nah, Java and .NET are much better, but they aren't fashionable.

tracker1 2 months ago

I think C# has been doing really well... I've appreciated the efforts to open the platform since Core... Though I do know a few devs that have been at it as long as I have that don't like the faster lifecycle since the move from Framework.

pjmlp2 months ago

Hello from one of them. While I appreciate the modernisation (with learnings from Midori as well) and officially going cross platform, it appears that some features exist mostly for the sake of keeping the team size up.

Most of our agency projects that have .NET in them are brownfield ongoing projects that mostly focus on .NET Framework, so we end up only using modern .NET when given the opportunity to deliver new microservices and the customer happens to be a .NET shop.

The last time this happened, .NET 8 had just been released. Most devs I work with tend to be journeymen; they aren't chasing programming language blogs or online communities to find out what changes in each release. They do the agency work and go home to friends, family and non-programming hobbies.

oncallthrow2 months ago

Meh, this is a defence in depth measure anyway

Edit: also, the supported platforms are ARM and x86. If your code isn’t running on one of those platforms, you probably know what you’re doing.

ctoth2 months ago

Linux

Windows and MacOS?

Go is supposed to be cross-platform. I guess it's cross-platform until it isn't, and will silently change the semantics of security-critical operations (yes, every library builder will definitely remember to check if it's enabled.)

YesThatTom2 2 months ago

If you need this for Windows so desperately why aren’t you offering to add support for that platform? It’s open source.

Many advanced Go features start on certain platforms and then expand to others once the kinks are worked out. It's a common pattern and has many benefits. Why port before it's stable?

I look forward to your PR.

baq2 months ago

Absolutely not the right take unless the OP is a security researcher

hypeatei2 months ago

> Meh, this is a defence in depth measure

Which is exactly why it should fail explicitly on unsupported platforms unless the developer says otherwise. I'm not sure how Go developers make things obvious, but presumably you have an ugly method or configuration option like:

  dangerousAllowSecretsToLeak()
...for when a developer understands the risk and doesn't want to panic.
kbolino2 months ago

This is a sharp-edged tool guarded behind an experimental flag. You are not meant to use it unless you want to participate in the experiment. Objections like this and the other one ("check if it's enabled" -- you can't, that's not what secret.Enabled() means) illustrate that this API may still need further evolution, which it won't get if it's never available to experiment with.

satellite2 2 months ago

Alternatively:

  enclave, err := secret.GetEnclave()
  if err != nil {
      // the platform doesn't support secret mode; decide how to handle that
  }
  enclave.Do(f)
voodooEntity2 months ago

Ok, I kinda get the idea, and with some modification it might be quite handy - but I wonder why it's deemed such an "unsolvable" issue right now.

It may sound naive, but for packages that hold data like the session material mentioned, or anything else that should not persist (until the next global GC) - why don't you just scramble the value before ending your current action?

And don't get me wrong - yes, that implies extra computation yada yada - but until a solution is practical and built in, I'd just recommend scrambling such variables with new data, so that no matter how long they persist, a dump would just return your "random" scramble and nothing actually relevant.

raggi2 months ago

It is fundamentally not possible to be in complete control of where the data you are working with is stored in go. The compiler is free to put things on the heap or on the stack however it wants. Relatedly it may make whatever copies it likes in between actions defined in the memory model which could leak arbitrary temporaries.

to11mtm2 months ago

Yeah, .NET tried to provide a specific type related to this concept (SecureString) in the past, and AFAIK there were two main problems that caused it to fall into disuse;

First one being, it was -very- tricky to use properly for most cases: APIs to the outside world typically would give a byte[] or string or char[], and then you fall into the problem space you mention. That is, if you used a byte[] or char[] array, and the GC does a relocation of the data, it may still be present in the old spot.

(Worth noting, the type itself doesn't do that, whatever you pass in gets copied to a non-gc buffer.)

The second issue is that there's not a unified Unix memory protection system like on Windows; the Windows implementation is able to use Crypt32 such that only the current process can read the memory it used for the buffer.

evntdrvn2 months ago

In case you’re interested in a potential successor: https://github.com/dotnet/designs/pull/147

skywhopper2 months ago

Hard to understand what you're asking. This is the solution that will be practical and built-in. This is a summary of a new feature coming to Go's runtime in 1.26.

willahmad2 months ago

without language level support, it makes code look like a mess.

Imagine 3 levels of nested calls where each calls another 3 methods - we are talking about 28 functions, each with a couple of variables. Of course you can still clean them up, but imagine how much cleaner the code will look if you don't have to.

Just like with garbage collection: you can free up memory yourself, but someone forgets something and you have either a memory leak or a security issue.

HendrikHensen2 months ago

With good helpers, it could become something as simple as

    key := make([]byte, 32)
    defer scramble(&key)
    // do all the secret stuff

Unless I don't understand the problem correctly.
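
For completeness, the scramble helper referenced above might look something like this (hypothetical, and subject to the caveats raised in the replies):

    // hypothetical helper: overwrite the slice contents in place
    func scramble(p *[]byte) {
        for i := range *p {
            (*p)[i] = 0
        }
    }
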
kbolino2 months ago

There are two main reasons why this approach isn't sufficient at a technical level, which are brought up by comments on the original proposal: https://github.com/golang/go/issues/21865

1) You are almost certainly going to be passing that key material to some other functions, and those functions may allocate and copy your data around; while core crypto operations could probably be identified and given special protection in their own right, this still creates a hole for "helper" functions that sit in the middle

2) The compiler can always keep some data in registers, and most Go code can be interrupted at any time, with the registers of the running goroutine copied to somewhere in memory temporarily; this is beyond your control and cannot be patched up after the fact by you even once control returns to your goroutine

So, even with your approach, (2) is a pretty serious and fundamental issue, and (1) is a pretty serious but mostly ergonomic issue. The two APIs also illustrate a basic difference in posture: secret.Do wipes everything except what you intentionally preserve beyond its scope, while scramble wipes only what you think it is important to wipe.

voodooEntity2 months ago

Thanks, you brought up good points.

While in my case I had a program in which I created an instance of such a secret, "used it", and then scrambled the variable; it never left that scope, so it worked.

Tho i didn't think of (2) which is especially problematic.

Prolly still would scramble in places where it's viable to implement, trying to reduce the surface even if I cannot fully remove it.

nemothekid2 months ago

As I understand it, this is too brittle. I think this is trivially defeated if someone adds an append to your code:

    func do_another_important_thing(key []byte) []byte {
       newKey := append(key, 0x0, 0x1) // this might make a copy!
       return newKey
    } 

    key := make([]byte, 32) 
    defer scramble(&key) 
    do_another_important_thing(key)
    // do all the secret stuff

Because of the copy that append might do, you now have 2 copies of the key in memory, but you only scramble one. There are many functions that might make a copy of the data, given that you don't manually manage memory in Go. And if you are writing an open source library that might have dozens of authors, it's better for the language to provide a guarantee, rather than hope that a developer who probably isn't born yet will remember not to call an "insecure" function.
voodooEntity2 months ago

Yep thats what i had in mind

compsciphd2 months ago

I could imagine code that did something like this for primitives

  secretStash := NewSecretStash()
  pString := secretStash.NewString()
  ....
  ....
  secretStash.Thrash()
yes, you now have to deal in pointers, but that's not too ugly, and everything is stored in secretStash, so you can iterate over all the types it supports and thrash them to make them unusable, even without the GC running.
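
A hypothetical sketch of such a stash (none of these types exist anywhere; note that strings are an awkward fit, since their backing bytes are immutable and can only be dereferenced, not overwritten):

    // hypothetical types, sketching the idea above
    type SecretStash struct {
        strings []*string
        bufs    []*[]byte
    }

    func NewSecretStash() *SecretStash { return &SecretStash{} }

    func (s *SecretStash) NewString() *string {
        p := new(string)
        s.strings = append(s.strings, p)
        return p
    }

    func (s *SecretStash) NewBytes(n int) *[]byte {
        b := make([]byte, n)
        s.bufs = append(s.bufs, &b)
        return &b
    }

    // Thrash makes everything handed out unusable without waiting for the GC.
    func (s *SecretStash) Thrash() {
        for _, p := range s.strings {
            *p = "" // drops the reference; old backing bytes remain until collected
        }
        for _, b := range s.bufs {
            for i := range *b {
                (*b)[i] = 0
            }
        }
    }
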
mbreese2 months ago

I used to see this in bash scripts all the time. It's somewhat gone out of favor (along with using long bash scripts).

If you had to prompt a user for a password, you’d read it in, use it, then thrash the value.

    read -p "Password: " PASSWD
    # do something with $PASSWD
    PASSWD="XXXXXXXXXXXXXXXXXX"
It’s not pretty, but a similar concept. (I also don't know how helpful it actually is, but that's another question...)
voodooEntity2 months ago

That's even better than what I had in mind, but agreed, it's also a good way to just scramble stuff unusable ++

compsciphd2 months ago

I'm now wondering whether, with a bit of unsafe, reflection and generics magic, one could make it work with any struct as well (use reflection to instantiate a generic type and use unsafe to just overwrite the bytes).
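
A hedged sketch of that idea using generics and unsafe (no reflection needed for the simple case). Caveat: overwriting memory that contains pointers this way bypasses the GC's write barriers, so it is only reasonable for plain-data structs:

    // assumes: import "unsafe"
    // thrash overwrites the raw bytes of *p; only safe for pointer-free types.
    func thrash[T any](p *T) {
        b := unsafe.Slice((*byte)(unsafe.Pointer(p)), unsafe.Sizeof(*p))
        for i := range b {
            b[i] = 0
        }
    }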

Thorrez2 months ago

> If an offset in an array is itself secret (you have a data array and the secret key always starts at data[100]), don't create a pointer to that location (don't create a pointer p to &data[100]). Otherwise, the garbage collector might store this pointer, since it needs to know about all active pointers to do its job. If someone launches an attack to access the GC's memory, your secret offset could be exposed.

That doesn't make sense to me. How can the "offset in an array itself" be "secret" if it's "always" 100? 100 isn't secret.

stingraycharles2 months ago

I think it may be about the absolute memory address at which the secret is stored, which may itself be exploitable (i.e. you're thinking about the offset value, rather than the pointer value). It's about leaking even indirect information that could be exploited in different ways. From my understanding, this type of cryptography goes to extreme lengths to basically hide everything.

That’s my hunch at least, but I’m not a security expert.

The example could probably have been better phrased.

Thorrez2 months ago

I don't see how a single absolute address could be exploitable based on my understanding of the threat model of this library. The library is in charge of erasing the secrets from memory. Once the secrets have been erased from memory, what would an attacker gain from knowing an absolute address?

The only thing that makes sense to me is a scenario with a lot of addresses. E.g. if there's an array of 256 integers, and those integers themselves aren't secret. Then there's a key composed of 32 of those integers, and the code picks which integers to use for the key by using pointers to them. If an attacker is able to know those 32 pointers, then the attacker can easily know what 32 integers the key is made of, and can thus know the key. Since the secret package doesn't erase pointers, it doesn't protect against this attack. The solution is to use 32 array indexes to choose the 32 integers, not 32 pointers to choose the 32 integers. The array indexes will be erased by the secret package.

burnt-resistor2 months ago

Consumer-grade hardware generally lacks real confidentiality assurance features. Such a software feature implemented in user space is moot without the ability to control context switching, rendering it mostly security theater. Security-critical bits should be done in a dedicated crypto processor that has tamper self-zeroing and self-contained RAM, or at the very least in the kernel, outside the reach of user-space processes. No matter how much marketing or blog hype is offered, it's lipstick on a pig. They've essentially implemented a soft, insecure HSM.

Big thumbs down from me.

thr0w4w4y1337 2 months ago

awnumar/memguard[1] exists and does even more

1) allocations via memguard bypass gc entirely

2) they are encrypted at all times when not unsealed

3) pages are mprotected to prevent leakage via swap

4) and so on...

Not as ergonomic as OP's proposal, of course.

[1] https://github.com/awnumar/memguard
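
Basic usage, as I recall it from memguard's README (check the actual docs before relying on any of these names):

    // assumes: import "github.com/awnumar/memguard"
    memguard.CatchInterrupt() // wipe everything if we get interrupted
    defer memguard.Purge()    // wipe the session on exit

    key := memguard.NewEnclaveRandom(32) // 32 random bytes, encrypted at rest

    buf, err := key.Open() // decrypt into a guarded, locked buffer
    if err != nil {
        panic(err)
    }
    defer buf.Destroy() // wipe and free when done

    use(buf.Bytes()) // hypothetical consumer of the raw key bytes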

jabedude2 months ago

How much control does a language runtime have over whether the memory controller actually zeros out physical memory? My guess is very little on consumer hardware but happy to be wrong

circuit10 2 months ago

If it's no longer readable by software that's at least far better than no protection, I imagine

robmccoll2 months ago

This is interesting, but how do you bootstrap it? How does this little software enclave get key material in without it transiting untrusted memory? From a file? I guess the attacker this is guarding against can read parts of memory remotely but doesn't have RCE. Seems like a better approach would be an explicitly separate allocator and message-passing boundaries. Maybe a new way to launch an isolated goroutine with limited copying channels.

cyberax2 months ago

> How does this little software enclave get key material in that doesn't transit untrusted memory?

Linux has memfd_secret ( https://man7.org/linux/man-pages/man2/memfd_secret.2.html ), which lets you create a memory region that is removed from the kernel's direct map, so it can't be casually read from outside the owning process's mapping.
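
A rough sketch of driving that from Go, assuming golang.org/x/sys/unix exposes MemfdSecret (Linux 5.14+ with secretmem enabled):

    // assumes: import "golang.org/x/sys/unix"
    func secretMapping(size int) ([]byte, error) {
        fd, err := unix.MemfdSecret(0)
        if err != nil {
            return nil, err // kernel too old or secretmem disabled
        }
        if err := unix.Ftruncate(fd, int64(size)); err != nil {
            return nil, err
        }
        // Pages backing this mapping are removed from the kernel's direct map.
        return unix.Mmap(fd, 0, size, unix.PROT_READ|unix.PROT_WRITE, unix.MAP_SHARED)
    }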

cafxx2 months ago

I find this example mildly infuriating/amusing:

    func Encrypt(message []byte) ([]byte, error) {
        var ciphertext []byte
        var encErr error
    
        secret.Do(func() {
            // ...
        })
        
        return ciphertext, encErr
    }
That suggests that somehow, for PFS, it is critical that the ephemeral key (not the long-term one) is zeroed out, while the plaintext message - i.e. the thing that in the example we allegedly want secrecy for - is totally fine to leave outside the whole `secret` machinery, and can remain in memory potentially "forever".

I get that the example is simplified (because what it should actually do is protect the long-term key, not the ephemeral one)... so, yeah, it's just a bad example.

kbolino2 months ago

PFS is just one of many desirable properties, and getting access to plaintext is just one of many kinds of threat. Getting access to ephemeral keys and other sensitive state can enable session hijacking. It's still not a great example, though, because it doesn't illustrate that threat model either.

baq2 months ago

    // Only the ciphertext leaves this closure.
This ideally should be describable by the type system.
maxloh2 months ago

> The new runtime/secret package lets you run a function in secret mode. After the function finishes, it immediately erases (zeroes out) the registers and stack it used.

I don't understand. Why do you need it in a garbage-collected language?

My impression was that you are not able to access any registers in these languages. That is handled by the compiler instead.

jerf2 months ago

This is about minimizing attack surface. Not only could secrets be leaked by hacking the OS process somehow to perform arbitrary reads on the memory space and send keys somewhere, they could also be leaked with root access to the machine running the process, root access to the virtualization layer, via other things like rowhammering potentially from an untrusted process in an entirely different virtual context running on the same machine, and at the really high end, attacks where government agents seizing your machine physically freeze your RAM (that is, reduce the physical temperature of your RAM to very low temperatures) when they confiscate your machine and read it out later. (I don't know if that is still possible with modern RAM, but even if it isn't I wouldn't care to bet much on the proposition that they don't have some other way to read RAM contents out if they really, really want to.) This isn't even intended as a complete list of the possibilities, just more than enough to justify the idea that in very high security environments there's a variety of threats that come from leaving things in RAM longer than you absolutely need to. You can't avoid having things in RAM to operate on them but you can ensure they are as transient as possible to minimize the attack window.

If you are concerned about secrets being zeroed out in almost any language, you need some sort of support for it. Non-GC'd languages are prone to optimize away zeroing out of memory before deallocation, because under normal circumstances a write to a value just before deallocation that is never effectfully read can be dropped without visible consequence to the rest of the program. And as compilers get smarter it can be harder to fool them with code, like, simply reading afterwards with no further visible effect might have been enough to fool 20th century compilers but nowadays I wouldn't count on my compiler being that stupid.

There are also plenty of languages where you may want to use values that are immutable within the context of the language, so there isn't even a way to express "let's zero out this RAM".

Basically, if you don't build this in as a language feature, you have a whole lot of pressures constantly pushing you in the other direction, because why wouldn't you want to avoid the cost of zeroing memory if you can? All kinds of reasons to try to avoid that.

er4hn2 months ago

In theory it prevents failures of the allocator that would allow reading uninitialized memory, which isn't really a thing in Go.

In practice it provides a straightforward path to complying with government crypto certification requirements like FIPS 140 that were written with languages in mind where this is an issue.

kbolino2 months ago

Go has both assembly language and unsafe pointer operations available. While any uses of these more advanced techniques should be vetted before going to production, they are obviously able to break out of any sandboxing that you might otherwise think a garbage collector provides.

And any language which can call C code that is resident in the same virtual memory space can have its own restrictions bypassed by said C code. This even applies to more restrictive runtimes like the JVM or Python.

tracker1 2 months ago

That's kind of the point of making it (relatively easily) accessible... in order to ensure that secrets are wiped from memory as quickly as is reasonable. This reduces a lot of potential surface area for attack.

I used to not really see the need for this level of detail on things... then you see actual in-the-wild exploits even for complex factors like CPU branch prediction (for a while now), and the need starts to become much more clear.

hamburglar2 months ago

The Go runtime may not be the only thing reading your process’ memory.

kittywantsbacon2 months ago

This would potentially protect against other processes reading memory via some system compromise - they would be able to get new secrets but not old ones.

vlovich123 2 months ago

Go is not a memory safe language. Even in memory safe languages, memory safety vulnerabilities can exist. Such vulnerabilities can be used to hijack your process into running untrusted code. Or, as others point out, sibling processes could attack yours. The underlying principle is defense in depth - you add another layer of protection that has to be bypassed to achieve an exploit. All the layers combined raise the expense of hacking a system.

tptacek2 months ago

Respectfully, this has become a message board canard. Go is absolutely a memory safe language. The problem is that "memory safe", in its most common usage, is a term of art, meaning "resilient against memory corruption exploits stemming from bounds checking, pointer provenance, uninitialized variables, type confusion and memory lifecycle issues". To say that Go isn't memory safe under that definition is a "big if true" claim, as it implies that many other mainstream languages commonly regarded as memory safe aren't.

Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".

https://www.memorysafety.org/docs/memory-safety/

vlovich123 2 months ago

I’m not aware of any definition of memory safety that allows for segfaults- by definition those are an indication of not being memory safe.

It is true that go is only memory unsafe in a specific scenario, but such things aren’t possible in true memory safe languages like c# or Java. That it only occurs in multithreaded scenarios matters little especially since concurrency is a huge selling point of the language and baked in.

Java can have data races, but those data races cannot be directly exploited into memory safety issues like you can with Go. I’m tired of Go fans treating memory safety as some continuum just because there are many specific classes of how memory safety can be violated and Go protecting against most is somehow the same as protecting against all (which is what being a memory safe language means whether you like it or not).

I’m not aware of any other major language claiming memory safety that is susceptible to segfaults.

https://www.ralfj.de/blog/2025/07/24/memory-safety.html

Mawr2 months ago

Safety is a continuum. It's a simple fact. Feel free to use a term other than memory safety to describe what you're talking about, but so long as you use safety, there's going to be a continuum.

Also, by your definition, e.g. Rust is not memory safe. And "It is true that Rust is only memory unsafe in a specific scenario, but [...]". I hope you agree.

gethly2 months ago

Yeah, I can hardly disagree with that sentiment myself.

raggi2 months ago

This seems like it might be expensive (though plausibly complete), so I wonder if it’ll actually benchmark with a low enough overhead to be practical. We already struggle with a lack of optimization in some of the named target use cases - that said this also means there’s space to make up.

hamburglar2 months ago

Personally, I’m more interested in what a process can do to protect a small amount of secret material longer-term, such as using wired memory and trust zones. I was hoping this would be an abstraction for that.

nixpulvis2 months ago

I looked into this a bit for a rust project I'm working on, it's slightly difficult to be confident, when you get all the way down to the CPU.

https://github.com/rust-lang/rust/issues/17046

https://github.com/conradkleinespel/rpassword/issues/100#iss...

jeffrallen2 months ago

Wow, this is so neat. I spent some time thinking about this problem years ago, and never thought of such an elegant solution.

__turbobrew__2 months ago

> Protection does not cover any global variables that f writes

Seems like this should raise a compiler error or panic at runtime.

yyyk2 months ago

Aka .NET SecureString - which is barely used because everything accepts String.

teeray2 months ago

I wonder if people will start using this as magic security sauce.

tjpnz2 months ago

More likely they'll use it and end up with a false sense of security.

_1tan2 months ago

Seems neat, anything similar in Java?

leoh2 months ago

Kind of stupid it didn’t have something like this to begin with tbh. It really is an incredible oversight when one steps back. I am fully ready to be downvoted to hell for this, but rust ftw.

IshKebab2 months ago

Rust doesn't have anything like this either. I think you misunderstood what it is.

leoh2 months ago

Okay, fair point, sort of. Rust does not have a built-in feature to zero data. Rust does automatically drop references to data on the heap. Zeroing data is fairly trivial, whereas in go, the issue is non-trivial (afaiu).

  use std::ptr;
  
  struct SecretData {
      data: Vec<u8>,
  }
  
  impl Drop for SecretData {
      fn drop(&mut self) {
          // Zero out the data
          unsafe {
              ptr::write_bytes(self.data.as_mut_ptr(), 0, self.data.len());
          }
      }
  }
steveklabnik2 months ago

Zeroing memory is trickier than that, if you want to do it in Rust you should use https://crates.io/crates/zeroize

IshKebab2 months ago

He was pretty close tbf - you just need to use `write_volatile` instead of `write_bytes`.

rurban2 months ago

Zeroing data does not protect from side-channel exfiltration. You really need to mfence it also. The zeroize crate doesn't help there either; it only protects against the compiler wrongly eliminating the zeroing writes as dead stores.

raggi2 months ago

It doesn't, but the problem space is more constrained since you are at least in control of heap vs stack storage. Register clearing is not natively available, though. To put it more simply: yes, but you can write this in Rust - you can't write it in Go today.

purplesyringa2 months ago

You can try to write it in Rust, doesn't mean you'll succeed. Rust targets the abstract machine, i.e. the wonderful land of optimizing compilers, which can copy your data anywhere they want and optimize out any attempts to scramble the bytes. What we'd need for this in Rust would be an integration with LLVM, and likely a number of modifications to LLVM passes, so that temporarily moved data can be tracked and erased. The only reason Go can even begin to do this is they have their own compiler suite.

leoh2 months ago

This is basically my point, in addition to the fact that the time at which data is freed from the heap is far more predictable.