We Took a Deep Dive Into Cybersecurity & Crypto With Edge Wallet’s CEO

Paul Puey, Edge's CEO and the mind behind Airbitz, takes us on a journey through the nuances of cybersecurity and cryptocurrency wallets.

At Blockshow Europe 2018, we had the chance to sit down with Paul Puey, the CEO of Edge—a multi-wallet platform that takes a unique approach to how it manages users’ private keys.

We caught him after a panel at which he spoke, called “Solution Session: Blockchain & Cybersecurity.” Sitting comfortably with a coffee in hand, Puey was ready to dive even deeper with us into the conversation surrounding cybersecurity and the cryptocurrency ecosystem.

CV: Tell me a little bit about this platform.

PP: Edge is a platform built to make securing private keys simple and familiar to the end user, and to solve what is probably the biggest cause of cryptocurrency loss: human error.

Edge was actually a rebrand and a refactor of my original company, Airbitz, which was simply a Bitcoin wallet that secured Bitcoin and allowed you to send, receive, buy, and sell on a mobile device—iOS and Android. Edge took that same fundamental principle of, “Let’s make keys very easy to manage.” When we launched Edge, we turned that into a multi-currency wallet that secures, sends, and receives not just Bitcoin but many other currencies.

We also took that and made an SDK, a platform for other dApps to use to secure their keys in their app in a way that’s much more user-friendly for their end users.

CV: What would you say is the biggest selling point for people who may want to use Edge, aside from other wallets?

PP: The main differentiator… There are main differentiators and subtle differentiators. “What do we do that no one does?” versus, “What do we do that’s just subtly better than others?”

So, the main differentiator is the heart of Edge, which is the way we manage private keys. We make sure that a user’s private keys are always automatically encrypted on the device. They don’t have to do any extra steps, and the keys are automatically backed up. Failure to back up is probably the biggest cause of loss.

People don’t properly back up their keys, or they back them up into insecure places. So, keys in Edge are automatically encrypted, automatically backed up, and automatically synchronized, because we live in a multi-device world.

We want to be able to access our funds and our info from our phone, tablet, and desktop, and synchronize between the devices. It’s very invisibly two-factored, so we’re protected in case someone else gets our credentials. And it’s even password-recoverable, which is maybe something we’re used to from centralized services like exchanges and forums and social media, but not something we’re used to for encrypted data.

For the most part, encrypted data has been non-recoverable: you lose the password, you cannot get access to it. You just have to guess what it was. So, we built all of that inside Airbitz, then turned it into a platform for other dApps to use, and we now use that platform inside the Edge wallet.

CV: Here’s the thing that I want to ask you, because as a person with cybersecurity experience, I have seen a lot of wallets implement things where they have this beautiful encryption… Wonderful encryption… And their keys are in a [vulnerable] SQL database or something. How does Edge wallet manage its keys?

PP: Keys, first of all, are encrypted client-side. For those of you out there listening who are somewhat technical: client-side encryption requires creating a key that encrypts the data. That key is a very strong hash of the user’s username and password.

And that hash is dynamically tuned to the strength of the device. So, as we know, there’s potential to brute-force human-generated keys. And that brute-force capability increases over time as computers get faster.

But as computers get faster, so do our devices. And as our devices get faster, we can hash passwords with stronger parameters, using a hash that’s not just CPU-hard but also memory-hard. We use scrypt, with parameters that far exceed what you would use for a scrypt miner.

So, the beauty of scrypt is that it takes not just the CPU but a tremendous amount of memory to be able to brute-force. We start with that.
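The key derivation Puey describes might look roughly like this (a minimal Python sketch using the standard library’s `hashlib.scrypt`; the function name and cost parameters here are illustrative, not Edge’s actual values):

```python
import hashlib
import os

def derive_encryption_key(username: str, password: str, salt: bytes) -> bytes:
    """Derive a symmetric encryption key from the user's credentials.

    scrypt is memory-hard as well as CPU-hard: each brute-force guess
    needs a large amount of RAM, not just fast cores. A real wallet
    would tune these cost parameters to the device's capabilities,
    as Puey describes; the values below are for illustration only.
    """
    return hashlib.scrypt(
        (username + password).encode("utf-8"),
        salt=salt,
        n=2**15,                  # CPU/memory cost factor
        r=8,                      # block size (drives memory use, ~34 MB here)
        p=1,                      # parallelization factor
        maxmem=64 * 1024 * 1024,  # allow enough working memory for n and r
        dklen=32,                 # 256-bit key for a symmetric cipher
    )

# Each account gets its own random salt, so identical passwords
# still produce different encryption keys.
salt = os.urandom(16)
key = derive_encryption_key("alice", "correct horse battery staple", salt)
```

Because the derived key never leaves the device, the server only ever stores ciphertext; anyone attacking the backup must pay the full scrypt cost for every password guess against every account.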

Second, we then back up the keys. Now, they are backed up to a server—admittedly, yes, an SQL database. One of the advantages of what we do, though, is that this database is not encrypted with one key. It’s encrypted with hundreds of thousands, or millions, of keys, each unique to an individual account.

So, each account that’s created has its own set of encryption keys, which the user generates.

And third, each account doesn’t just encrypt the private keys—it encrypts everything. That means there is no association between an end user and a blob of encrypted data in our database, so if you wanted to attack the database, you’d have to go through every single row and try to brute-force each account.

And even if you’re able to brute-force one, you don’t even know if it has any money in it. No public addresses are visible inside our system. So, while nothing is 100-percent secure, really it’s about the risk-reward ratio—how much effort versus how much reward you would get as an attacker.

In our system, that ratio very much favors the user, not the attacker. There’s a tremendous amount of energy an attacker has to spend to brute-force, and not a lot of reward to be gained.

CV: Earlier in a conversation we had, if you don’t mind us talking about it, you mentioned hardware wallets and that you’re not a big fan of them. Can you reiterate some of what you said?

PP: Yeah, correct. Hardware wallets. I think hardware wallets have a purpose in the industry. If you’re an enterprise securing funds for potentially hundreds of thousands, if not millions, of users, with multiple people in the company and a lot of funds at risk, then there’s a purpose for hardware wallets.

Because, for the most part, you have an IT department and people whose job it is to deal with the complexities of a hardware wallet. But when you talk about mass adoption and end users who are just trying to use, store, or hold their money, they’re not accustomed to this.

They’re not accustomed to the concept of securing a private key. One of the challenges with hardware wallets is that they solve a problem that doesn’t exist and create one that does. The problem that does exist is that people don’t know how to manage keys: they lose them or put them in insecure places.

I agree that hardware wallets themselves—the actual electronics—are incredibly secure. What I fundamentally disagree with is the idea that the entire holistic system of a hardware wallet is secure, because there’s a major hole, which is the backup.

CV: The human.

PP: It’s the human! It’s these 24 words that you just throw at a human and say, “Go deal with it.” What they do with that is the massive hole in security—and when I say “security,” I mean everything that could cause a user to lose money.

That’s not just someone potentially finding that backup but also the user just potentially losing it. Or, in some cases, even the backup deteriorating because they don’t put it in a good place. Maybe they put it on a type of paper that rubs off very easily.

As a matter of fact, I was just talking to a guy yesterday at one of the parties who said that he was the one that notified—I won’t mention which, but it was either Trezor or Ledger—of a flaw in the card stock that was sent to the users with the hardware wallet that they recommended that people write down their keys on.

The flaw was that the card stock had a laminated surface, so the ink would rub off—it’s as simple as that. After he discovered this, that hardware wallet company put out a big tweet notifying all their users that they needed to put the backup somewhere else.

Now, granted, we realize that people, after hitting that hurdle, might put the backup somewhere else anyway. They might use a different tool or a different material to write down the key.

But where do you put it? How do you do two things that are both very hard for a human to do?

If you’ve ever heard the term “out of sight, out of mind”: hardware wallets—and most other wallets, even software wallets—are asking you to do two things at once. Keep something out of sight, so no one else attacks it, but keep it in mind so you don’t lose it.

And as humans, that is the polar opposite of what we’re used to. It’s usually only once a device fails—when you lose your phone—that you realize you don’t have a backup.

Or, when you lose your hardware wallet, you don’t know where the backup is. But for that one, two, three, or four years, you think you have it, because you haven’t needed it.

This is the inherent challenge of solutions like hardware wallets. Enterprises can handle it—they have processes and staff training in place and whatnot. But for the average end user, I don’t think it’s a fundamental solution.

CV: Let’s talk a little bit about software wallets and their security. What do you think are the biggest cybersecurity-related challenges that wallet developers face today?

PP: So, one of the challenges is, first, understanding what the risk to a software wallet even is—determining what you are actually trying to protect against as a software wallet.

There’s so much misinformation because the media says one thing, but statistics say another. So, a lot of people are scared of what I call “the ultimate God malware.” The malware that can get on your phone, see all, do all, and enact anything it wants to do on disk, in memory, on the network, all of that.

Realistically, how often have we seen malware of that caliber on a device? To my understanding, it has essentially never existed in the wild, and when something close appears, it’s found and patched. Usually, it requires a flaw in the operating system.

That’s a downright flaw in the operating system and it’s typically patched before it’s in widespread deployment.

Second, there is what actually is somewhat common: we have seen things such as keyboard loggers and screen grabbers.

Keyboard loggers might get installed by virtue of simply installing a custom keyboard—it’s as simple as that. Screen grabbers are a little bit harder; they require a bit more access to the system, but you might also see those on a desktop computer.

So now, the challenge for software developers is actually to build software that tries to protect against those issues.

If you’re worried about the “God malware,” that’s very hard. There are few things you can do against the “God malware.” But let’s actually put our energy towards what people are likely to get.

And so, from the viewpoint of a screen grabber: should you be putting the private key on the screen in the first place? Ironically, most software wallets claim to be very secure, use the secure element on the phone, and “can’t be breached.”

I won’t name any, but there’s definitely a few out there. They only run on phones that have a secure element. But… They’ll show the key on the screen.

So, now you’re displaying on-screen exactly what it takes to steal the money. The irony of that, right?!

Granted, one thing you can do to protect against that as a software developer is to not show the entire key all at once. Most screen grabbers do not capture video.

If they were to do that, they would have to burn through your phone’s battery. You would notice it pretty quickly and go, “What the hell’s wrong with my phone?”

Imagine capturing and streaming every second of footage on your phone. It would be tremendously resource-intensive. So what do they do? They’re just taking snapshots.

So, by splitting up when your app shows parts of the words on-screen, you help protect against that. An even better approach is, “Just don’t show the private key at all,” which is effectively what we do at Edge.
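The snapshot-grabber mitigation described above can be sketched as follows (a hypothetical Python illustration; the chunking scheme and names are invented for this example, not taken from Edge’s actual UI code):

```python
def backup_phrase_screens(words, per_screen=6):
    """Split a backup phrase into small groups shown one screen at a
    time, so no single snapshot ever captures the whole key."""
    return [words[i:i + per_screen]
            for i in range(0, len(words), per_screen)]

# A 24-word phrase becomes four separate screens of six words each;
# a grabber taking periodic snapshots would have to catch all four
# screens to reconstruct the key.
```

A grabber that only takes occasional snapshots sees at most one fragment per capture, which is exactly the asymmetry Puey is pointing at: video capture would be required to defeat this, and video capture is conspicuous.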

We don’t ever show the private key unless the user absolutely wants to see it. And if they do… That’s the power user—the person who says, “No, I absolutely want to see my key, and I’m taking responsibility for it.”

But we never make it the default. We don’t even make it easy because seeing it is a security risk.

You know, many people don’t see it that way. They’re like, “No, I want to see it because I want to back it up and protect it.” I’m like, “OK, here it is! But realize you’ve now opened up another attack vector.”

CV: A Pandora’s Box…

PP: You’ve opened up a Pandora’s Box. What do you do with it now?

What can software developers do? Number one: Just don’t show the key if you don’t need to. It’s as simple as that.

Now, the second thing—and I’ve seen some software wallets do this—is that if someone has a keylogger, don’t make them type the key in. If someone has to type in the key, that could effectively compromise it as well.

And a keylogger is the most common piece of malware they’re going to get. So, what I see some wallets doing, which is an improvement, is instead having people tap parts of the screen to verify the key as opposed to typing it in.

You know, you have to prove that you backed up the key and copied the words. So the user puts the words in order by tapping them, but they don’t type the words in verbatim, because that would be an easy way to compromise them.
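The tap-to-verify flow Puey credits other wallets with might be sketched like this (a hypothetical Python illustration; none of these names come from any real wallet’s code):

```python
import random

def make_verification_challenge(words, rng=None):
    """Present the backup words in shuffled order; the user proves the
    backup by tapping them back into sequence, so the words are never
    typed and a keylogger captures nothing."""
    rng = rng or random.Random()
    shuffled = list(words)
    rng.shuffle(shuffled)
    return shuffled

def verify_backup(original, tapped_order):
    """The backup is confirmed only if the tapped sequence matches the
    original word order exactly."""
    return list(tapped_order) == list(original)
```

The design choice here is to move the input channel from the keyboard (which a logger can observe) to screen taps on randomized positions, which a keylogger alone cannot reconstruct.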

So, I’m seeing some wallets do that, which is a good thing. I always give credit where it’s due, even if we don’t [show the mnemonic phrases] ourselves, because we have a different model altogether.

I hope that answers your original question. I think it did.

CV: Yes, yes it did.

CV: Do you anticipate any regulatory challenges from the US government in particular as far as wallet development is concerned?

PP: I expect it, but it’s going to be on hosted wallets. Those are the low-hanging fruit. They mirror the banking system. They’re a target because you can take existing regulations—

CV: Sorry to interrupt, but you’re talking about web wallets?

PP: No, not web wallets. Hosted wallets. You can have a wallet on the web that’s not hosted. Who controls the private keys?

So, you can have a web wallet where the user controls the keys. Now, a hosted wallet like Coinbase or Inputs.io—which was a hosted wallet that got compromised—those types of solutions are absolutely banks.

One of the most popular wallets out there is hosted, and many people aren’t aware of it, because it has the look and feel of a regular wallet, with a regular login interface—but it’s fully, 100-percent hosted. Those are, in every shape and form, banks.

They will be the low-hanging fruit, and governments don’t move unless they need to—let’s face it. And those are the solutions and providers creating the biggest security problems people are complaining about. So, that’s where the government moves.

I don’t imagine that user-controlled, non-custodial wallets will have any issues for a very long time—probably long enough that, by then, the government won’t really have a whole lot of control to be able to put regulations on non-hosted, non-custodial wallets.

CV: You know, there was a directive from the European Commission at one point saying that wallet providers also have to do KYC. I’m not sure how far it actually got as far as enforcement is concerned. It got very far with exchanges, but—

PP: Exactly—wallets. So, like I said, the regulations will hit the hosted wallets very hard, such as exchanges, because they are hosted wallets. I think it’s also going to hit the wallets that are not exchanges but are simply hosted, for sure.

But for the decentralized, non-custodial wallets, it’ll be very, very hard to hit those providers because so many of them are not even providers; they’re just simply software on open-source GitHub repositories that anybody can compile and build.

Now, who do you hit? That becomes the lingering question, and in a case like that, the conclusion becomes, “This isn’t worth regulating, because you squash one thing and it just pops up somewhere else.”

This is one of the reasons why I don’t think that part of the industry is going to have the regulatory burden that the hosted solutions have.

CV: Do you think that Edge wallet will ever have anything to do with KYC, AML, or the IRS, for example, in the United States?

PP: It will with respect to the partners we work with for people to buy, sell, and use crypto. Those partners will implement KYC—and already have—but that KYC goes directly from the end user of an Edge or Airbitz wallet, which have buy-sell services integrated, from their phone straight to those services.

When I submit my name, email, phone number, address, social security number, it goes right from my phone directly to that service. Edge and Airbitz never see any of that info because we’re not the ones providing that service. We’re not providing a buy-sell service.

But that will be the closest thing people will see to our application or our platform doing any KYC, because literally, our platform is just software.

We’re software and it’s running on your computer and it’s talking to a decentralized network. That’s what it is. It’s not much different than you actually doing that with a full node software that you download and build yourself.

So, yes, there will be an aspect of what feels like KYC. It won’t be us doing it—it’ll be the partners—and it will KYC your transactions with those partners. It won’t KYC every transaction you do on the network.

CV: Currently, web-based blockchain apps are getting a lot of heat because of the recent breaches of MyEtherWallet, where some users suffered tremendous losses due to DNS poisoning, as I call it. How do mobile app wallet ecosystems fare in comparison?

PP: It really comes down to how much friction there is to modify the app. It comes down just to that.

Web-based applications have incredibly low friction to modify, because all you do is change the code on the server, or redirect the server somewhere else, and suddenly anyone using your application via the web is running different code.

Mobile applications don’t have that low level of friction—they have much more. You have to actually upload a different version of the application to an app store, and someone has to download it or get an update from the app store. There is a delay in that process.

Because of that delay—and delay is a very commonly used concept in security—it imparts a level of security. Knowing that, as a hacker, do I want to attack a web wallet? Or attack a downloadable app, where I know that once I attack it, maybe some people will download it before it gets detected, then it gets unwound and I only got a little bit of money?

Like I mentioned on stage, it’s not about security being 100 percent. It’s about the effort not being worth the reward.

So, there’s far, far less reward for the effort when the content is downloadable. Now, granted, there is still more friction you can add to a system that is web-based.

As an example—and I also mentioned this on stage—if you’re deploying something web-based, something that is important for people to get right, one thing you can do as a security measure is deploy, or have friends, colleagues, partners, or other companies deploy, a server that checks your website.

Have four different partner companies or people—or even internal people in your company who don’t have access to each other’s servers—deploy a server that checks your website and makes sure that a hash of the site’s content is exactly the same as what is stored on their checking server.

Do that every five seconds, and if anything changes, instantly send massive alerts to everyone—PagerDuty, Slack messages, Twitter and Facebook messages. Then you know that not only are you protected, but if it’s known in the industry that these are your security measures, then as an attacker, do you want to attack that website? Or a different one that doesn’t have those measures?
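The monitoring scheme described above can be sketched roughly as follows (a minimal Python illustration using only the standard library; the URLs, polling interval, and alert hook are placeholders, not a real deployment):

```python
import hashlib
import time
import urllib.request

def content_hash(data: bytes) -> str:
    """SHA-256 fingerprint of the raw page content."""
    return hashlib.sha256(data).hexdigest()

def fetch_hash(url: str) -> str:
    """Download the page and return its content fingerprint."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return content_hash(resp.read())

def watch(url: str, known_good: str, alert, interval: int = 5):
    """Poll the site; on any deviation from the known-good hash, fire
    the alert callback (PagerDuty, Slack, etc. in a real deployment)."""
    while True:
        if fetch_hash(url) != known_good:
            alert(f"Content of {url} changed -- possible compromise!")
        time.sleep(interval)
```

Running several independent copies of `watch` on servers that don’t share credentials is what gives the scheme its teeth: an attacker would have to compromise every watcher as well as the site itself to change the page silently.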

As I’ve mentioned, don’t be the slowest gazelle, because that’s the one that gets eaten. Put security measures in place not just to protect yourself but to deter the attack in the first place.

CV: You have a privacy coin. You have Zcoin. Why not Zcash or Monero?

PP: Well, the number one reason is that the Zcoin team contributed support for Zcoin to our codebase. That’s probably the biggest reason.

They didn’t contribute the privacy-specific features; I think they’re still deliberating which privacy features they want to implement. But we definitely welcome them to contribute a port of those specific privacy features into Edge as they deploy them.

So, I’d be happy to see that. We’re still looking at Zcash and we’re looking at Monero. We’d like to implement some of those as well because there is a lot of consumer demand for them.

But, to answer your question, the biggest reason is that they contributed the support. And at this point, the support for Zcoin is incredibly similar to Bitcoin’s, so only a few parameters needed changing.

No new signature formats, transaction formats, or whatnot were necessary. So it was easy, just to put it that way.