
Theory vs. Practice

Diagnosis is not the end, but the beginning of practice. – Martin H. Fischer


› Who Innovates And At What Cost?

PQCrypto (an EU academic/private "multi-million Euro research project" with MasterCard and Intel on its strategic advisory board) has issued initial recommendations presenting SPHINCS, its answer to the threat facing today's Public-Key Encryption (PKE), Symmetric-Key Encryption (SKE), and hashing standards:

  • fully-broken: PKE RSA, DH, DSA, ECC (factoring & discrete logarithm solved by Shor's quantum algorithm)
  • half-broken: SKE (halved by Grover's quantum algorithm; AES-256 provides 128 bits of post-quantum security)
  • half-broken: Hashing (halved by Grover's quantum algorithm; SHA-256 provides 128 bits of post-quantum security)

PQCrypto explains that hash-based PKE (SPHINCS) is desirable because the number-theoretical security assumptions of other PKE schemes are less well-understood (each PKE family's key sizes are available here):

  • Code-based systems: mainly encryption (McEliece) (some broken, others not)
  • Lattice-based systems: encryption (NTRU) and signatures (some broken, others not)
  • Multivariate Quadratic systems: encryption and signatures (some broken, others not)
  • Hash-based systems: signatures (SPHINCS), where a one-time public key is derived by hashing a one-time secret key (see the Lamport sketch after this list)
  • Quasi-Cyclic Moderate-Density Parity-Check based McEliece (QC-MDPC) systems: "not well-studied"
  • Isogeny-based systems: "not well-studied"

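To make the hash-based approach concrete, here is a minimal sketch of a Lamport one-time signature in Python. It illustrates the principle only – it is not SPHINCS itself (SPHINCS builds a far more elaborate many-time construction on top of such one-time signatures), and the 256-bit parameters are chosen for simplicity.

```python
# Minimal Lamport one-time signature sketch (illustrative only, not SPHINCS).
# Secret key: 2 x 256 random 32-byte strings; public key: their SHA-256 hashes.
# Signing a 256-bit message digest reveals one secret string per digest bit.
import hashlib
import os

def keygen():
    sk = [[os.urandom(32) for _ in range(256)] for _ in range(2)]
    pk = [[hashlib.sha256(s).digest() for s in row] for row in sk]
    return sk, pk

def bits_of(message: bytes):
    digest = hashlib.sha256(message).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(message: bytes, sk):
    return [sk[b][i] for i, b in enumerate(bits_of(message))]   # reveal one secret per bit

def verify(message: bytes, signature, pk) -> bool:
    return all(hashlib.sha256(signature[i]).digest() == pk[b][i]
               for i, b in enumerate(bits_of(message)))

sk, pk = keygen()
sig = sign(b"hello", sk)
assert verify(b"hello", sig, pk)      # each key pair must be used only once
```
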
Finally, PQCrypto says its logo is a turtle because "Post-Quantum security is much more complicated and therefore much slower".
The only sane way to contradict someone is to do better.


What is a "number-theoretical security assumption"?


It's the belief that a calculation cannot be reversed in practice (e.g., multiplying 2 by 3 to get 6 is easy, while recovering the factors 2 and 3 from 6 alone is supposedly infeasible once the numbers are large enough). Obviously, the actual calculations are more complex, but that's the security principle behind today's security standards.
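
As a toy illustration of that asymmetry (not the actual RSA construction, and with deliberately tiny numbers), multiplying is instantaneous while brute-force factoring slows down rapidly as the factors grow:

```python
# Toy illustration of a "number-theoretical security assumption":
# multiplying two primes is trivial; recovering them from the product by
# trial division is only feasible here because the factors are tiny.
def multiply(p: int, q: int) -> int:
    return p * q                      # the "easy" direction

def factor(n: int):
    d = 2
    while d * d <= n:                 # brute-force trial division: the "hard" direction
        if n % d == 0:
            return d, n // d
        d += 1
    return None                       # n is prime

n = multiply(101, 113)
print(n)                              # 11413 (easy)
print(factor(n))                      # (101, 113) -- hopeless at 2048-bit sizes
```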

If you feel a bit uncomfortable about such an assumption, well, the facts are on your side: as seen above (with RSA, DH, DSA, ECC) every single "number-theoretical security assumption" used so far in cryptographic standards has been proven wrong.

Did this industry learn from its mistakes?

We are told that AES-256/SHA-256 are still safe because "they provide 128 bits of quantum security". This statement implies that an attacker still has to break a 128-bit key-space. Is that an established truth or yet another hazardous bet?

Grover's quantum algorithm inverts functions faster than classical algorithms because it requires about √(n/d) queries (n: key-space, d: duplicate solutions) instead of n queries. For AES/SHA, Grover's halves the effective key/output length. For SHA it also finds solutions increasingly fast as the hash input becomes larger than its output (collisions are more likely).

BHT combines the birthday effect with Grover's to achieve a further reduction to roughly ∛(n/d) queries. As a result, AES/SHA should provide an output of (at least) 384 classical bits (rather than 256 classical bits) to achieve a mere 128 bits of quantum security.
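
A quick back-of-the-envelope check of that arithmetic, assuming Grover's square-root speed-up for key search/preimages and BHT's cube-root speed-up for collision finding:

```python
# Effective post-quantum work factors under the assumptions above.
def grover_bits(n_bits: int) -> float:
    return n_bits / 2                 # 2^n candidates -> ~2^(n/2) quantum queries

def bht_collision_bits(n_bits: int) -> float:
    return n_bits / 3                 # 2^n outputs -> ~2^(n/3) quantum queries

print(grover_bits(256))               # AES-256 key search: ~128 "quantum bits"
print(bht_collision_bits(256))        # SHA-256 collisions: ~85 "quantum bits"
print(bht_collision_bits(384))        # a 384-bit output: ~128 "quantum bits"
```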

Even worse: a well-known range of classical methods running on classical CPUs, despite their exponential worst-case runtime, exploits subtle domain-specific structures. The algebraic structures of AES, MD5 and SHA have been exploited for a long time (X. Wang et al., "Collisions for Hash Functions MD4, MD5, HAVAL-128, and RIPEMD", Cryptology ePrint Archive, report no. 199, Aug. 2004, pp. 1–4; E. Biham and R. Chen, "Near-Collisions of SHA-0", Proc. Int'l Cryptology Conf. (CRYPTO), LNCS 3152, Springer, 2004, pp. 290–305).

These classical attacks are in use today: they don't require quantum computers, and are much faster than Grover's.

To avoid both kinds of attacks, defense contractors don't publish their algorithms. Hopeless on computers, smartphones and tablets (penetrated through hardware and software: OS, firmware, apps), this tactic may only be relevant for reasonably safe algorithms implemented in well-protected ASIC chips.

Crypto based on complexity is too fragile. Based on uncertainty it would no longer have to rely on shady security assumptions.


SPHINCS: stateless, practical, hash-based, incredibly nice cryptographic signatures


"Stateless"? Yes. "Practical"? Users will decide. "Hash-Based"? Not perfect but reasonably good. "Incredibly nice cryptographic signatures"? To say the least, the PQCrypto researchers are proud of their work.

This is the "practical and incredibly nice" SPHINCS-256:

Security: 2^128 post-quantum
Public Key: 1,056 bytes
Secret Key: 1,088 bytes
Signature: 41,000 bytes

With 200 signatures per second on a 4-core 3 GHz Intel Haswell CPU, SPHINCS signing performance sits at the bottom of the SUPERCOP benchmark (for both speed and signature size), despite a non-portable, parallelized implementation relying on recent Intel CPUs featuring 256-bit AVX2 vector instructions (available on less than 0.00001% of all Internet clients).

PQCrypto recognizes that other faster signing and asymmetric encryption-capable algorithms are needed for secure protocols (SSL, etc.) and for legacy / low-powered CPUs:

PQCrypto wrote: "busy Internet servers are subject to tremendous performance pressures: often a single server is required to handle tens of thousands of clients every second."

This is true. For example, TWD's application server G-WAN, created in 2009 for Global-WAN, handles more than 800,000 Web client requests per second on a 7-year-old Apple MacPro. If SPHINCS were a mainstream standard, it would divide G-WAN's performance by a factor of more than 4,000 (800,000 requests per second versus 200 signatures per second). How many datacenters would survive such a tax?

PQcrypto added: "More than 98% of all microprocessors sold worldwide are used in embedded devices, and this trend will most likely accelerate due to emerging applications in the Internet of Things (IoT)".

So, SPHINCS is not for high-performance servers. But how fast is it on devices that do not require concurrency, such as car / bus / plane infotainment systems, smartcards, or even RFID stickers? (RFID tags ring a bell if you exit a shop and forget to pay.)

Having implemented SPHINCS-256 on ARM Cortex M3 32-bit CPUs designed for embedded systems, PQCrypto wrote: "The 589,018,151 cycles used by SPHINCS may be acceptable for non-interactive applications [...] but we believe that further algorithmic improvements to stateless hash-based signatures will be needed to enable deployment on a broader scale."

From these results, we can calculate human-friendly execution times: depending on the ARM Cortex M3 clock speed (72–180 MHz), one single SPHINCS signature takes between 3.2 and 8.2 seconds.

Converting CPU cycles into seconds: 72 MHz means 72 million CPU clock ticks per second, so 589,018,151 cycles / 72,000,000 = 8.18 seconds. The faster the CPU clock, the more it does; that's why a higher clock frequency is desirable and why it was a drama when its rise stopped in 2001. Since then, CPUs don't run quicker but they do more in less time, thanks to more advanced instruction sets and parallelism: CPUs embed several 'cores' (2/4/6... instances of the same CPU) that are synchronized to work together.
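
The conversion above is easy to reproduce; a few lines suffice to check the figures for any clock rate (the cycle count is the one PQCrypto reported):

```python
# Convert the reported SPHINCS-256 signing cycle count into wall-clock time.
CYCLES = 589_018_151                  # ARM Cortex M3 cycles per signature (PQCrypto)

for mhz in (72, 180):
    seconds = CYCLES / (mhz * 1_000_000)
    print(f"{mhz} MHz: {seconds:.2f} s per signature")
# 72 MHz: 8.18 s per signature
# 180 MHz: 3.27 s per signature
```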

So SPHINCS is not for embedded clients either: connected cars, smart-TVs, residential modems and routers, smartphones, tablets – almost all consumer / industrial / mobile devices – to which the looming IoT should be added.

And this low-consumption ARM CPU is several orders of magnitude faster than 8-bit microcontrollers.

That's a pity because hash-based signatures don't need the cryptographic coprocessor that RSA / ECDSA require on 8-bit chips. A fast algorithm would generate big savings in smartcards: AMEX / VISA / MasterCard credit/debit cards, Telco SIM cards, or e-health cards. And RFID chips would also cost far less in e-passports / ID cards and in logistics / anti-theft / copy-protection tags (e.g., printer cartridges).

So, either a much faster hash-based algorithm will surface, or the world will have to rely (again) on hazardous number-theoretical assumptions, the kind that has made today's asymmetric encryption qualify for full replacement. Factoring and discrete-logarithm solving improve continually, even on classical computers – an area now dominated by Chinese supercomputers and cheap FPGA arrays exploiting parallelisation.

Are Quantum Computers (QC) the threat? IBM says that QCs are a 15-year threat to classical cryptography – but unpublished factoring capabilities on classical computers would be as much of a threat. These may have existed long before QCs which, like efficient factoring capabilities, would have been immediately classified.

–  Is PQCrypto trying hard to offer more reliable security than others? (Microsoft and Google prefer number-theoretical security.)
–  Or is SPHINCS demonstrating that no such thing as serious security can be made available?

Now let's consider an alternative in use for... 9 years.


TrustLeap: stateless, practical, SKE-based, zero-hype cryptographic signatures


Digital signatures based on Symmetric-Key Encryption (SKE) have significant advantages:

  1. lower computational costs (much faster and scalable software and hardware implementations)
  2. design relies on a well-understood theory applied for more than half a century (no shady assumptions)
  3. at least as safe as hashing (unlike hashing, symmetric-key encryption can be made provably unbreakable; see the one-time-pad sketch after this list)
  4. work in-place instead of requiring memory allocation (handy on microcontrollers, critically important on ASICs)
  5. rely on a lean, flexible, general-purpose PKE algorithm also suitable for key-exchanges and asymmetric encryption.

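Point 3 refers to information-theoretic security; the textbook example is the one-time pad, sketched below (an illustration of the principle, not TrustLeap's algorithm):

```python
# Minimal one-time-pad sketch: with a truly random key as long as the message,
# kept secret and never reused, symmetric encryption is provably unbreakable.
import os

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    assert len(key) == len(message)   # the key must be as long as the message
    return bytes(m ^ k for m, k in zip(message, key))

otp_decrypt = otp_encrypt             # XOR is its own inverse

msg = b"attack at dawn"
key = os.urandom(len(msg))            # random, secret, single-use
ct  = otp_encrypt(msg, key)
assert otp_decrypt(ct, key) == msg
```
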
SKE-based PKE schemes are much safer and easier to design than SPHINCS but PQCrypto does not even list them as candidates.

In November 2017 (one and a half years after this blog post was written), MICROSOFT and several universities (including Princeton) published a paper titled "Post-Quantum Zero-Knowledge and Signatures from Symmetric-Key Primitives" presenting their work called "picnic":
"We propose a new class of post-quantum digital signature schemes that: (a) derive their security entirely from the security of symmetric-key primitives, believed to be quantum-secure, and (b) have extremely small keypairs, and, (c) are highly parameterizable."


TWD, TrustLeap and Global-WAN are not cited as previous work, despite their technology having been found and validated in 2007 – without any funding.
Could it be because TWD's technology is so much faster and safer? ("picnic" relies on multiplications and pseudo-randomness)

TrustLeap (TWD's security division) was founded in 2007 to market TWD's post-quantum cryptography (TWD itself was founded in 1998). This organically-funded private initiative gave birth to Global-WAN in 2010, a lightweight Layer-2 distributed VPN (protected by post-quantum PKE and post-quantum SKE, so key sizes did not have to be enlarged, letting end-users re-use any existing key generator/repository).

Nine years after its design, Global-WAN's post-quantum security is still ahead of academic research. TrustLeap PKE is suitable for asymmetric encryption, key exchanges, and digital signatures:

Security: 2^128 post-quantum   (same as SPHINCS-256)
Public Key: 32 bytes   (33x better than SPHINCS-256)
Secret Key: 32 bytes   (34x better than SPHINCS-256)
Signature: 64 bytes   (427x better than SPHINCS-256)

or

Security: 2^256 post-quantum   (2x better than SPHINCS-256)
Public Key: 64 bytes   (16x better than SPHINCS-256)
Secret Key: 64 bytes   (17x better than SPHINCS-256)
Signature: 128 bytes   (213x better than SPHINCS-256)

SPHINCS-256's portable C implementation offers around 20 signatures/second on recent 4-core 3 GHz Intel Haswell CPUs, while TrustLeap PKE's portable C implementation delivers 3.6 million signatures/second (without allocating memory, and with a sub-linear performance penalty for larger keys, hence an unrestricted audience and much smaller, cheaper, power-saving FPGAs / ASICs).

Unsuitable for asymmetric encryption and key-exchanges, SPHINCS is 180,000 times slower than TrustLeap PKE.

MICROSOFT's and Princeton's "picnic" delivers 32,752-byte signatures at a rate of 56 signatures per second (for SUPERCOP's entry test with a 59-byte message; signatures can reach 209,474 bytes for larger messages – thanks to a bulky and unsafe design based on complexity instead of true security).

TrustLeap PKE is even faster and produces smaller ciphers and keys than RSA-512 (SUPERCOP's "ronald512"), broken in 1999. So, TrustLeap secures and accelerates protocol handshakes, secure storage, and payments in today's infrastructure (Web, email, VPNs, remote administration, video surveillance, OS / firmware updates, smartcards, RFID chips, blockchains...).

In today's cryptography, factoring and discrete logarithms have been solved faster than by trying all the possibilities one by one.
This is why these PKE algorithms (RSA, DH, ECC, etc.), previously presented as "provably-safe", now have to be replaced.

In contrast, in TrustLeap PKE's case, as long as the (freely interchangeable) hash function is not broken, the only way to break the signature is a brute-force attack on the whole keyspace (2^128, 2^256, or more depending on the signed document's length).

Hash length & signatures: all signature schemes should be expendable (or applied repeatedly) to adequately protect static documents or network packets whatever their size (potentially too small or too large). Signing a tiny payload may expose one's long-term secret key, and signing a large document with one single fixed-size hash makes the purpose of a signature weaker. As adequate protection may require more work for processors, very fast signing algorithms are desirable.

Further, TrustLeap post-quantum PKE and SKE let applications rely on 128-bit keys and ultra-light algorithms. This leverages legacy systems and fosters the deployment of low-powered chips like RFID tags (contact-less components that harvest their energy from environmental electric fields with a tiny antenna).

Nine years after TWD resolved the problem, PQCrypto (a public/private project) is subsidized to work on it: "Apart from asymmetric cryptosystems, post-quantum cryptography on embedded devices also needs a redesign of symmetric algorithms. Most of today's lightweight crypto aims at an 80-bit or at most 128-bit security level. Those algorithms are broken by Grover's algorithm".

With a tiny and fast signature algorithm, security can be raised to adequate levels by signing several chunks of a document that is too large to be safely hashed one single time (or by using hash functions whose output length can be safely correlated to their input length).
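
A sketch of that chunking idea follows; sign_fn is a hypothetical placeholder for whatever fast signature primitive is used, and the chunk size is an arbitrary choice:

```python
# Sign a large document per-chunk instead of with one single fixed-size hash.
import hashlib
import hmac

CHUNK = 1 << 20                                   # 1 MiB chunks (arbitrary)

def sign_document(data: bytes, sign_fn):
    signatures = []
    for i in range(0, len(data), CHUNK):
        digest = hashlib.sha256(data[i:i + CHUNK]).digest()
        signatures.append(sign_fn(digest))        # one signature per chunk hash
    return signatures

# Demo with an HMAC standing in for a real (public-key) signature primitive:
demo_sign = lambda d: hmac.new(b"demo-key", d, hashlib.sha256).digest()
print(len(sign_document(b"x" * (3 * CHUNK + 5), demo_sign)))   # 4 chunk signatures
```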


The larger the hashing function's input is compared to its output, the more information is lost, the more likely collisions become, and the more security declines – up to the point of defeating the signature's purpose.

If boring backup programs bother to checksum file chunks (rather than whole files), that may be for a reason (delta-copy is a more recent motivation than... data integrity, which has always been desirable).


In a world where digital signatures are legally-binding contracts(!), provide stealth remote access to servers / desktops / laptops / ultrabooks / tablets / POS / ATMs / embedded systems (even when powered-down), wirelessly track passports and credit cards (in airports, train stations, highways, shopping-malls), and control how money is exchanged... appropriate levels of security will protect legitimate users:

If your car (all 5-year-old or newer cars are connected) embraces a tree at full speed, if your bank account is siphoned, your emails hijacked, your votes falsified, or your company's contracts and sales redirected to your competitors, well, as a consolation, you will know why and how it happened.

Even policy-makers such as the Center for Strategic and International Studies (CSIS), a US think tank, have noticed how detrimental such short-sighted tactics have been to critical-infrastructure security (and economic development):

"The best outcome would a multilateral agreement that let people secure their data with the strongest possible encryption, using products that allow for the recovery of plaintext by national authorities".

"The key to legitimacy is that citizens will accept actions from their own governments that they will not accept from other governments (particularly the U.S.)."

Ditching backdoored standards and deploying real security is certainly an improvement as compared to today's situation, depicted as a "market failure in cyber-security" by The Economist.

Now that we have a significant and well-documented real-world example to discuss, it's time to go back to the subject of this post.


Who Innovates And At What Cost?


Which process led to TrustLeap's technology? Instead of trying to add something on top of a multi-layered construction made by many different people over a period exceeding 30 years, TWD resolved the problem from scratch in a few months.

Either you re-use ad nauseam what has already been done and get the same results as before... or you feel the need to work until you find a better way. This is about who feels the need to leave one's comfort zone, and who doesn't.

As a result, most academic papers are of poor scientific value, if any (the goal for academics is to "publish to exist" to enhance their reputation and get more grants). Yet, each paper costs (at least) USD 400,000 to the taxpayer.

And this cost estimate only takes into account the salaries of two B-school researchers. Increasingly, academic papers list more and more researchers to "rub the back" of colleagues who will in turn invite you as a guest in their own publications, artificially raising the so-desirable "academic exposure" of the friendly colleagues involved.

It also ignores other costs like buildings, office space, equipment and licenses, electricity, documentation, subscriptions, and travel expenses associated with the academic function. Add the mandatory peripheral services (assistants, accounting, restaurants and cafeterias, parking, retirement, etc.) and the real cost per academic paper is well above USD 1 million.

I would have loved being paid that much for the documentation I wrote about each of the projects I delivered in my career as an Engineer, R&D Director, and CEO.

And it would have made economic sense, because each of these projects carried enough innovation and value at the time for many people to pay me and buy the delivered commercial products – a value that almost all academic publications lack.

TrustLeap was necessarily driven by efficiency as TWD, founded in 1998 (like Google), could not build datacenters (unlike Google and most universities).

As most State-independent businesses are not financed, they take risks. They can't buy capacity, advertising, lobbyists, judicial leniency, tax breaks, complacent public contracts and subsidies, nor even their competitors. In contrast Apple, Amazon, Google, IBM, Microsoft, and Facebook (the "leaders") just have to issue new shares (immediately bought by most central banks that enjoy common shareholders, board members, and advisors).

Take the Bayer-Monsanto merger. Some of the world's most powerful institutional shareholders (Blackrock, Vanguard, Deutsche Asset Management) hold shares in Bayer and Monsanto – but also in many of their rivals. These hidden links increase the concentration of power and lessen competition.

As a result, the "market leaders" – but no one else – can manufacture and promote any product they wish (even at a loss, a tactic often used to defeat less funded competitors). Failure is not an issue as any loss is immediately compensated by new investments. Universities and Governments are financed by the same people, hence the generalized presence of "symbiotic" public/private partnerships.

If you don't have access to capital then you can be an employee, a consultant, or create an independent company that makes intellectual property (books, music, software). For the latter, the cost of entry is low: one computer is enough. This may work provided that the "leaders" (where the money is) are eager to hire you, to buy your stuff, or to let the public buy it. This is a favor they don't do for everyone: to stay the "leaders", oxygen must not be shared outside of the privileged group.

It's easy to be first when you can make sure that there's no competition.

But recurring financing is not the only difference between high-fliers and the rest of us: an unlimited comfort-zone has its downsides, among which guaranteed increasing mediocrity (due to the lack of challenge) is not the least.

Achievements require experience, cleverness and persistence: the ability to accept more pain than others and not give up until a decent solution is found. Some scorn it as "misplaced pride" but it's more like a duty to fully use what we are (the sole dignity available to humans) whatever the challenge (surviving in a twisted environment, keeping brains and body alive, or raising kids).

This essay is an attempt to explain (a) how people without a dedicated academic background (here in cryptography) and no external funding nor any public reputation built by complacent (and often generously appointed) third-parties can consistently outperform in various matters the publicly recognized experts of their field, and (b) why it may matter if we want to improve as human-beings. It's not about morality: it's about how to perform as a sustainable society.

Educating people about fundamental values should start at school, at the beginning of their lives:

Human beings are notoriously bad at logic and calculations, especially when these are detached from any real-world application. Students believe it's the sole path to success because they have no choice, and this trap limits them for the rest of their lives. That might prove to be the wrong selection criterion. Learning many things by heart and regurgitating them in the expected order is merely proof of one's conformance capabilities, as required to accept uncreative full-time tasks. Academia aside, demonstrating this capability may be less relevant than the rewarding joy of identifying and playing with the intrinsic principle of things.

Despite abundant financing, academia might not be the best environment for R&D. Scientists only exist if they are published – and there's a significant amount of politics transpiring from academic papers where rubbing the back of colleagues (and getting substantial private grants financed by public money) seems far more important than advancing the state of Science.

If General de Gaulle is quoted as having said, during a visit to the CNRS (the French National Centre for Scientific Research):
"We find as many Researchers as we want, but we are still searching for Finders!", then that may be for a reason.
Finders and Researchers don't play the same game.

Why and how do Finders find what Researchers merely search for?

  • Finders don't have the choice: delivering innovation that actually works is their only possible way to make a living.
  • Researchers, on the other hand, are paid to search rather than to find. For them, finding means the end of research!

In a finder's head, the process is invariably the same: taking on a problem from various angles until a good fit is identified (it often surfaces after a few days working on the subject, when night sleep ends). What was nebulous is then crystal-clear and the next problem can be approached with a fresh, relaxed mind. It has nothing to do with conscious logical thought: "interior but coming out-of-nowhere, ready-to-use insights" are conveniently delivered at the door of consciousness when it re-engages with daytime.

This "source of innovation" is not the outcome of a bulky-by-design multi-layered construction spanning on decades (academia at work). It's neither the outcome of sustained capital injection but rather the result of a passionate work that lets inspiration materialize in true innovations.