A Homomorphic Encryption Illustrated Primer


Homomorphic encryption schemes provide a remarkable capability - the ability to perform calculations on data without knowing what the data is. This lets you work out the answers to questions while keeping potentially sensitive source data private.

One of the most promising approaches (and the subject of some recent standardisation efforts) is known as the Fan-Vercauteren (FV) scheme (sometimes the Brakerski-Fan-Vercauteren, or BFV, scheme), and this is what we will illustrate in depth here. There are a number of implementations you can use to try things out:

Library URL License Language
HEAAN https://github.com/kimandrik/HEAAN CC non commercial C++
NFLib https://github.com/quarkslab/NFLlib GPLv3 C++
FV-NFLlib https://github.com/CryptoExperts/FV-NFLlib GPLv3 C++
cuHE https://github.com/vernamlab/cuHE MIT CUDA C++
PALISADE https://git.njit.edu/palisade/PALISADE/tree/master Liberal C++
SEAL https://www.microsoft.com/en-us/research/project/simple-encrypted-arithmetic-library/ Microsoft research only C++
HELib https://github.com/shaih/HElib Apache 2.0 C++
jLBC http://gas.dia.unisa.it/projects/jlbc/download.html LGPLv3 Java

These encryption schemes appear complicated and a little mysterious, but hopefully this post will give you a clear idea of how and why they work.

The overall structure of this post covers some introductory maths, how the encryption and decryption steps work, and then homomorphic addition and multiplication. This is quite a lot of material, so strap in for the ride.

Some introductory maths

These homomorphic encryption schemes are based on a hard computation problem known as Ring Learning with Errors. Here we will unpack what the Ring part of that title means. Essentially, data in these schemes is represented by polynomials both when it is encrypted (the ciphertext) and when it is unencrypted (the plaintext).

These are almost the normal polynomials that everyone studies at school. Things like

\[4 x^2 + 2 x+1\]

There are some differences, the first being that the coefficients are all whole numbers and are the remainder relative to some other whole number \(t\) (that is \(\bmod t\) ). Say \(t=24\), then this is like a 24 hour clock, where adding 6 hours to 21 takes you to 3. All the coefficients of the polynomials are treated like this.

Alternatively, we can consider the numbers to range from -11 to 12, allowing us to negate numbers conveniently. Note that this is just a convenience - there is no difference between a "remainder" of -1 and a remainder of 23 (when dividing by 24).
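As a quick illustration (in Python; the helper name is our own), mapping a residue mod 24 into this balanced range looks like:

```python
T = 24  # the coefficient modulus from the example above

def centered(c):
    """Map a residue mod 24 into the balanced range -11 .. 12."""
    c %= T
    return c - T if c > T // 2 else c

print(centered(23), centered(12), centered(13))  # -1 12 -11
```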

The second, and trickier, thing that is different about these polynomials is that this idea of using remainders applies not just to the coefficients of the polynomials, but also to the polynomials themselves.

We define a special polynomial called the polynomial modulus, and only consider the remainder of polynomials when they have been divided by this polynomial modulus. The specific form of this polynomial modulus in the FV scheme is \(x^d + 1\) where \(d=2^n\) for some \(n\). For illustration here we will take \(n=4\), so the polynomial is \(x^{16}+1\).

Because we are considering remainders with respect to \(x^{16}+1\), we only have to consider polynomials with powers from \(x^0\) to \(x^{15}\). Any larger powers will be reduced by division by the polynomial modulus. This can also be understood by considering that \(x^{16} \equiv -1 \pmod{ x^{16} + 1 }\), meaning that \(x^{16}\) can be replaced by -1 to reduce the larger powers of \(x\) into the range 0 to 15.
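This reduction rule is easy to express in code. Here is a small Python sketch (the function name is our own): every time a power of \(x\) wraps past \(x^{16}\), its coefficient picks up a sign flip.

```python
import numpy as np

D = 16  # the polynomial modulus is x^16 + 1

def reduce_mod_ring(coeffs, t):
    """Reduce a coefficient list (lowest power first) mod x^D + 1, coefficients mod t."""
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(coeffs):
        sign = -1 if (power // D) % 2 else 1  # x^D wraps around to -1
        out[power % D] = (out[power % D] + sign * c) % t
    return out

# x^16 reduces to -1, which is 23 mod 24
print(reduce_mod_ring([0] * 16 + [1], 24))
```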

So, all the polynomials we are considering are of the form

\[a_{15} x^{15}+a_{14} x^{14}+a_{13} x^{13}+a_{12} x^{12}+a_{11} x^{11}+a_{10} x^{10}+a_9 x^9+a_8 x^8 \] \[+a_7 x^7+a_6 x^6+a_5 x^5+a_4 x^4+a_3 x^3+a_2 x^2+a_1 x+a_0\]

where each of the 16 coefficients \(a_i\) can range from 0 to \(t-1\). We can illustrate this as a torus of coefficients as shown here:

In this figure, each loop represents the 24 possible values of the coefficient of one of the powers of \(x\) that appears in the polynomial. The green dots represent where the 0 value of the coefficient lies. This gives us a nice way to visualise the polynomials, and this will be helpful below when we consider how the encryption and decryption steps work.

The FV encryption scheme involves a lot of multiplications of polynomials. When we multiply two powers of \(x\), say \(2 x^{14}\) and \(x^4\), we add their exponents - making \(2 x^{18}\). One might assume that finding the remainder of this polynomial with respect to the polynomial modulus just involves rotating the exponent back through 0 at \(x^{16}\), to give \(2 x^2\), like it does for the integer coefficients shown above. This would be the case if the polynomial modulus were just \(x^{16}\). However, our polynomial modulus is \(x^{16}+1\) - as mentioned above, the extra plus one introduces a sign change which helps further scramble the result of the multiplication.

As illustrated above, multiplication of a term \(2 x^{14}\) by \(x^4\) modulo \(x^{16}+1\) takes this term (represented by the red dot above), rotates it forward 4 powers around the torus, and then reflects the value across the 0 point, making \(22 x^{2}\) (or \(-2 x^2\) if we consider numbers from -11 to 12 rather than 0 to 23).
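This rotate-and-reflect behaviour is easy to check numerically. A small Python sketch (the helper names are our own), multiplying \(2x^{14}\) by \(x^4\) with coefficients mod 24:

```python
import numpy as np

D = 16

def ring_mul(a, b, t):
    """Multiply two polynomials mod x^D + 1, coefficients mod t (lists, lowest power first)."""
    full = np.convolve(a, b)                  # ordinary polynomial product
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1  # the "+1" in x^D + 1 flips the sign on wrap-around
        out[power % D] = (out[power % D] + sign * c) % t
    return out

a = [0] * D; a[14] = 2  # 2 x^14
b = [0] * D; b[4] = 1   # x^4
print(ring_mul(a, b, 24))  # 22 (= -2 mod 24) at the x^2 position
```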

Polynomials with this form have a very rich structure and many nice properties. They are a subset of the cyclotomic polynomials. Using one of them as a polynomial modulus is not strictly necessary, however doing so provides a number of conveniences and speedups.

Encryption using polynomial rings

Now that we have introduced some properties of the polynomial rings that are used in the FV encryption scheme, we can talk about how the encryption and decryption works. First up we need to talk about how the private and public keys are generated, and then how they are used for encryption and decryption.

The private and public keys

Encryption takes a plaintext and converts it to a ciphertext using a public key which is derived from a private key. The conversion from plaintext to ciphertext is done in a way that is easily reversible only if you know the private key.

More specifically, the plaintext is a polynomial from the ring with polynomial modulus \(x^d + 1\), with \(d=2^n\) as above, and a coefficient modulus, \(t\). The encryption of a plaintext is a ciphertext which is represented by two polynomials from the ring with the same polynomial modulus, but a coefficient modulus \(q\), which is typically much larger than \(t\).

For example, the polynomial modulus may be \(x^{4096} + 1\), meaning the polynomials in both the plaintext and the ciphertext have \(d=4096\) coefficients. The coefficients of the plaintext polynomial might be calculated modulo \(t=290764801\), and the coefficients of the ciphertext polynomials might be modulo \(q=9214347247561474048\) or larger.

For illustrative purposes, we will use much smaller numbers, but hopefully these will give a better idea of what is going on in the various steps of the scheme. In this first section, to make things very visual, we will use \(d=16\), \(t=7\), and \(q=896\) (so that \(q/t=128\)). Note, these parameters are not secure!! (If you want some secure parameters, have a look at Section 8.3 of this version of the SEAL manual which was current at the time of writing.)

For the private or secret key, which we call \(s\), we generate a random polynomial with coefficients of either -1, 0 or 1. For example, \[s=x^{15}-x^{13}-x^{12}-x^{11}-x^9+x^8+x^6-x^4+x^2+x-1\]

Next we generate a public key, using a random polynomial from the ciphertext space, with coefficients modulo \(q\), which we call \(a\). \[a=42 x^{15}-256 x^{14}-393 x^{13}-229 x^{12}+447 x^{11}-369 x^{10}-212 x^9+107 x^8\] \[+52 x^7+70 x^6-138 x^5+322 x^4+186 x^3-282 x^2-60 x+84\]

We also define an error polynomial, which is "small" in the sense that it has small coefficients drawn from a discrete Gaussian distribution. This polynomial is used once here, then discarded.

\[e=-3 x^{15}+x^{14}+x^{13}+7 x^{12}-6 x^{11}-6 x^{10}+x^9+4 x^8\] \[-x^6+3 x^5-4 x^4+4 x^3+4 x+1\]

The public key is then defined as the pair of polynomials, \({\bf pk}=(\left [{-a s + e}\right ]_q , a)\), where the polynomial arithmetic is done modulo the polynomial modulus and the coefficient arithmetic modulo \(q\).

For the example given above, the first polynomial of the public key is constructed as

\[{\bf pk}_0 = -285 x^{15} - 431 x^{14} - 32 x^{13} + 86 x^{12} - 83 x^{11} -142 x^{10} -41 x^9\] \[ + 430 x^8 + 26 x^7 - 158 x^6 - 281 x^5 + 377 x^4 + 110 x^3 - 234 x^2 - 113 x + 252 \]

The first multiplication takes the polynomial \(a\), which has coefficients from the entire range of \(\bmod q\), and multiplies it by \(s\), whose coefficients are each -1, 0, or 1. Due to the "rotate and reflect" nature of polynomial multiplication with respect to the polynomial modulus, this effectively mixes and scrambles all of the coefficients of \(a\), and then the small error term is added on top. The polynomial \(a\) effectively masks the secret key in the public key.

Breaking the encryption scheme from the public key basically involves working out \(s\) from the pair \((\left [{-a s + e}\right ]_q , a)\). This motivates the inclusion of the error term in the scheme - if \(e\) is zero, it is easy to work out \(s\) from the key. When \(e\) is sufficiently large, but not too large, this is a hard problem - see Section 6.1 of the original paper for the details.

In the toy case used for illustration here, the secret key could be recovered with a brute force attack - trying every possible \(s\) in \(-a s + e\), (there are only 3^16 = 43,046,721 combinations), and looking for one that gives an answer close to the first term of the public key. For real parameters, this brute force approach is totally infeasible - 3^4096 is a very large number - but there are cleverer methods that then define the security for a given set of parameters.
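A sketch of the key generation in Python (our own illustrative code, using the toy parameters with \(q=896\) so that \(q/t=128\); for simplicity we draw the error coefficients from {-1, 0, 1} rather than a wider discrete Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)
D, Q = 16, 896  # toy parameters -- not secure!

def ring_mul(a, b, q):
    """Multiply two polynomials mod x^D + 1, coefficients mod q."""
    full = np.convolve(a, b)
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1  # x^D wraps to -1
        out[power % D] += sign * c
    return out % q

def keygen():
    s = rng.integers(-1, 2, D)   # secret key: coefficients in {-1, 0, 1}
    a = rng.integers(0, Q, D)    # uniform mask polynomial
    e = rng.integers(-1, 2, D)   # small error, used once then discarded
    pk = ((-ring_mul(a, s, Q) + e) % Q, a)  # ([-a s + e]_q, a)
    return s, pk

s, pk = keygen()
```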


Encryption

The encryption process looks a little like the public key generation process.

Encryption takes a plaintext, which is a polynomial with coefficients modulo \(t\), and converts it to a pair of polynomials with coefficients modulo \(q\). As an illustration, we will encrypt a very simple polynomial message - \(m = 3 + 4 x^8 \equiv 3 - 3 x^8\) - with only two non-zero coefficients.

Encryption requires three more small polynomials: two error polynomials drawn from the same sort of discrete Gaussian distribution used for the error polynomial in the public key, and another polynomial we will call \(u\) whose coefficients are drawn from -1, 0 or 1, just like the private key.

\[e_1=-5 x^{15}-2 x^{14}+3 x^{13}-x^{12}-4 x^{11}+3 x^{10}+x^9+4 x^8\] \[+4 x^7+5 x^6-4 x^5-3 x^4-3 x^3+2 x^2-6 x+4\]

\[e_2=-7 x^{15}+2 x^{14}-4 x^{13}+5 x^{11}+2 x^{10}-x^9+4 x^8\] \[-4 x^7-3 x^6+2 x^5-2 x^4+x^3-4 x^2-2 x+2\] and \[u=x^{14}+x^{13}+x^{12}-x^8-x^5-x^3+1\]

These polynomials are used in the encryption process, then discarded.

The ciphertext is represented by two polynomials calculated as \({\bf ct} = (\left [{{\bf pk}_0 u + e_1 + q m / t}\right ]_q , \left [{{\bf pk}_1 u + e_2}\right ]_q)\). Note where the message appears and how it appears - the values in the message are in the range of \(\bmod t\), and they are scaled by \(q/t\) (128 in our toy example), making them cover the range of \(\bmod q\). That is the only change to the message on insertion into the ciphertext. These values are then masked by adding the first term, \({\bf pk}_0 u\), whose coefficients range over all of \(\bmod q\) and are indistinguishable from random noise. The random nature of \(u\) changes the mask used in each encryption, thereby ensuring that the same plaintext produces very different ciphertexts each time it is encrypted.
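The encryption step can be sketched like this in Python (our own illustration; as before we use the toy parameters with \(q=896\), and draw the error terms from {-1, 0, 1} rather than a wider Gaussian so the examples always decrypt):

```python
import numpy as np

rng = np.random.default_rng(1)
D, T, Q = 16, 7, 896     # toy parameters -- not secure!
DELTA = Q // T           # the q/t scaling, 128 here

def ring_mul(a, b, q):
    full = np.convolve(a, b)
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1  # x^D wraps to -1
        out[power % D] += sign * c
    return out % q

def small():
    return rng.integers(-1, 2, D)  # coefficients in {-1, 0, 1}

# key generation, as in the previous section
s = small()
a = rng.integers(0, Q, D)
pk = ((-ring_mul(a, s, Q) + small()) % Q, a)

def encrypt(m):
    """ct = ([pk0 u + e1 + (q/t) m]_q, [pk1 u + e2]_q), with fresh u, e1, e2 each call."""
    u, e1, e2 = small(), small(), small()
    return ((ring_mul(pk[0], u, Q) + e1 + DELTA * np.asarray(m)) % Q,
            (ring_mul(pk[1], u, Q) + e2) % Q)

m = np.zeros(D, dtype=np.int64)
m[0], m[8] = 3, 4        # the message 3 + 4 x^8
ct = encrypt(m)
```

Because \(u\) and the error terms are fresh on every call, encrypting the same message twice gives two entirely different ciphertexts.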

Homomorphic addition and multiplication work because the message is present in the ciphertext only with a scaling. The other terms are there to hide the message, and are provably effective in doing so in a way that can be removed only if you know the secret key.

Evaluating the first element of the ciphertext explicitly using the polynomials above gives

\[{\bf ct}_0=217 x^{15}-53 x^{14}+13 x^{13}-249 x^{12}-392 x^{11}-238 x^{10}+252 x^9+115 x^8\] \[+5 x^7+184 x^6-201 x^5-258 x^4-247 x^3+144 x^2+23 x+42\].

Expanding the public key in the first term of the ciphertext, we see that \({\bf ct}_0 = \left [e_1 + e u - a u s + q m / t\right ]_q \). In this expression, the first two terms are "small", being proportional to the error terms, and the second two terms are "large". The first large term effectively hides the second, which contains the message.

The second element of the ciphertext is calculated like this:

\[{\bf ct}_1=25 x^{15}+225 x^{14}-12 x^{13}+270 x^{12}+350 x^{11}-24 x^{10}+56 x^9-330 x^8\] \[+386 x^7+225 x^6-332 x^5+68 x^4-20 x^3-26 x^2-91 x+380\]

Expanding the public key in the second element of the ciphertext, we see that \( {\bf ct}_1 = \left [a u + e_2\right ]_q \). This shows how decryption can work - if we know \(s\), we can calculate \({\bf ct}_1 s = \left [a u s + e_2 s\right ]_q \), which can be used to remove the non-message large term in the first element of the ciphertext.

Summarising, the ciphertext can be expressed in terms of the public key, private key, mask, noise and message as \[ {\bf ct} = (\left [{q m / t + e_1 + e u - a u s}\right ]_q , \left [{a u + e_2}\right ]_q) \]
As outlined above, decryption is relatively simple. To begin, we calculate \(\left [{{\bf ct}_0 + {\bf ct}_1 s}\right ]_q\), which removes the mask from the message entirely. This gives us a polynomial that expands to \( \left [q m / t + e_1 + e u + e_2 s \right ]_q \) - that is, the scaled message plus some noise terms. So, as long as the noise terms aren't too big, we can recover the message.

Explicitly, \[{\bf ct}_1 s + {\bf ct}_0=13 x^{15}-2 x^{14}+17 x^{13}+22 x^{12}-32 x^{11}-23 x^{10}+19 x^9-380 x^8\] \[+9 x^7+10 x^6-13 x^5-3 x^4-2 x^3-12 x^2+7 x+393\] Here you can see that all the coefficients apart from the two non-zero coefficients (\(x^8\) and \(x^0\)) of the plaintext are smaller than q/t = 128. If we rescale this polynomial back to the values in the range of \(\bmod t\), then we have \[ \frac{13 x^{15}}{128}-\frac{x^{14}}{64}+\frac{17 x^{13}}{128}+\frac{11 x^{12}}{64}-\frac{x^{11}}{4}-\frac{23 x^{10}}{128}+\frac{19 x^9}{128}-\frac{95 x^8}{32}\] \[+\frac{9 x^7}{128}+\frac{5 x^6}{64}-\frac{13 x^5}{128}-\frac{3 x^4}{128}-\frac{x^3}{64}-\frac{3 x^2}{32}+\frac{7 x}{128}+\frac{393}{128} \]

Rounding these coefficients recovers our message \[m = 3-3 x^8\].


Putting it all together, we decrypt a ciphertext by calculating \[ m' = \left[ \left\lfloor \frac{t}{q} \left [{{\bf ct}_0 + {\bf ct}_1 s}\right]_q \right\rceil \right]_t \] where \(\left\lfloor \right\rceil \) denotes rounding to the nearest integer.

If the coefficients are too noisy, then they will end up closer to a different integer than the correct one, and the decryption will (silently!) fail and produce an incorrect result. In the example above, the largest error was \(13/128\), so there is still some headroom for additional noise while still allowing a correct decryption. The amount of headroom for noise can be adjusted by making the ratio \(q/t\) larger or smaller.
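Putting encryption and decryption together in one Python sketch (our own code; toy parameters with \(q=896\), and tiny error terms drawn from {-1, 0, 1} so the noise always stays well inside the \(q/(2t)\) headroom):

```python
import numpy as np

rng = np.random.default_rng(2)
D, T, Q = 16, 7, 896   # toy parameters -- not secure!
DELTA = Q // T         # 128

def ring_mul(a, b, q):
    full = np.convolve(a, b)
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1
        out[power % D] += sign * c
    return out % q

def small():
    return rng.integers(-1, 2, D)

s = small()                                    # secret key
a = rng.integers(0, Q, D)
pk = ((-ring_mul(a, s, Q) + small()) % Q, a)   # public key

def encrypt(m):
    u = small()
    return ((ring_mul(pk[0], u, Q) + small() + DELTA * np.asarray(m)) % Q,
            (ring_mul(pk[1], u, Q) + small()) % Q)

def decrypt(ct):
    """m' = [round(t/q * [ct0 + ct1 s]_q)]_t"""
    v = (ct[0] + ring_mul(ct[1], s, Q)) % Q    # removes the mask, leaving q m / t plus noise
    return np.round(T * v / Q).astype(np.int64) % T

m = np.zeros(D, dtype=np.int64)
m[0], m[8] = 3, 4
print(decrypt(encrypt(m)))  # recovers the message 3 + 4 x^8
```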

Homomorphic Operations

One of the main reasons there is so much interest in these types of cryptographic schemes is that they allow homomorphic additions and multiplications (from the Greek homos - same, and morphe - shape). This means that you can add up and multiply numbers while they are still encrypted, without having to decrypt them first. This is an amazing capability that promises to set a new gold standard in data protection and security.

Homomorphic addition

The simplest case is the addition of two encrypted numbers. Let us assume we have encrypted two polynomials, \(m_1\) and \(m_2\) with the same public key: \[{\bf a} = (\left [{{\bf pk}_0 u_1 + e_1 + q m_1 / t}\right ]_q , \left [{{\bf pk}_1 u_1 + e_2}\right ]_q),\] \[{\bf b} = (\left [{{\bf pk}_0 u_2 + e_3 + q m_2 / t}\right ]_q , \left [{{\bf pk}_1 u_2 + e_4}\right ]_q).\] Note that we needed to use two different small polynomials \(u_1\) and \(u_2\), as well as 4 small noise polynomials \(e_1 ... e_4\) to do this.

If we just add the corresponding ciphertext elements, we get a new ciphertext \[ {\bf a} + {\bf b} = \left(\left [{{\bf pk}_0 (u_1+u_2) + (e_1 + e_3) + q (m_1 + m_2) / t}\right ]_q , \left [{{\bf pk}_1 (u_1 + u_2) + (e_2 + e_4)}\right ]_q \right) \]

Because the message is present in the ciphertext with only a scaling, the result of the addition is the same form as an encryption of \(m_1 + m_2\) with new noise terms: \[ {\bf c} = \left(\left [{{\bf pk}_0 (u_3) + (e_5) + q (m_1 + m_2) / t}\right ]_q , \left [{{\bf pk}_1 (u_3) + (e_6)}\right ]_q \right) \]

The approximate decryption (before rounding) will be \[ \left [q (m_1 + m_2) / t + e_5 + e u_3 + e_6 s \right ]_q \]

This means that the message \(m_1 + m_2\) will decrypt correctly as long as the new noise terms are not too large. There are three types of noise terms: \[ e_5 = e_1 + e_3 \] \[ e u_3 = e (u_1 + u_2) \] \[ e_6 s = (e_2 + e_4) s \]
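A Python sketch of homomorphic addition (our own illustration; here we use the larger ratio \(q = 1024t = 7168\) that appears in the multiplication experiments later in the post, so the extra noise from an addition still decrypts reliably):

```python
import numpy as np

rng = np.random.default_rng(3)
D, T, Q = 16, 7, 7168   # q = 1024 t, for more noise headroom -- still not secure!
DELTA = Q // T

def ring_mul(a, b, q):
    full = np.convolve(a, b)
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1
        out[power % D] += sign * c
    return out % q

def small():
    return rng.integers(-1, 2, D)  # coefficients in {-1, 0, 1}

s = small()
a = rng.integers(0, Q, D)
pk = ((-ring_mul(a, s, Q) + small()) % Q, a)

def encrypt(m):
    u = small()
    return ((ring_mul(pk[0], u, Q) + small() + DELTA * np.asarray(m)) % Q,
            (ring_mul(pk[1], u, Q) + small()) % Q)

def decrypt(ct):
    v = (ct[0] + ring_mul(ct[1], s, Q)) % Q
    return np.round(T * v / Q).astype(np.int64) % T

m1, m2 = rng.integers(0, T, D), rng.integers(0, T, D)
ca, cb = encrypt(m1), encrypt(m2)

# homomorphic addition: just add the ciphertext components, coefficients mod q
c_sum = ((ca[0] + cb[0]) % Q, (ca[1] + cb[1]) % Q)
print(decrypt(c_sum))   # equals (m1 + m2) mod t
```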

What we are worried about is when these terms become large enough that one of the coefficients in the noise polynomial becomes bigger than \(q/(2 t)\), at which point the decryption will fail, as the rounding operation at the end of the decryption process will round to the wrong number.

If we consider just the first term then we are adding up polynomials with coefficients drawn from a discrete Gaussian distribution. This means that in some cases we will be adding a negative coefficient to a positive coefficient, and the result will be closer to zero. In other cases, the coefficients will have the same sign, and so the result will be larger. We can do many homomorphic additions, and it is instructive to see how the error terms scale with the number of additions that are done. The distribution of coefficients is shown below for the cases where we have added 1, 5 and 30 error polynomials (randomised over several hundred trials).

When we've added 30 of these error polynomials, there is a reasonable chance that some of the coefficients will be bigger than 64, which is half of q/t in this case, so decryption will not produce the correct result.

The other two terms represent different cases - the second term is an error polynomial multiplied by the sum of some number of "small polynomials" (coefficients of -1, 0, or 1). This multiplication leads to much larger noise. A coefficient of the result of the product of an error polynomial and a small polynomial will be the sum of roughly 2/3rds of the noise polynomial coefficients with random positive or negative signs. This means this noise term scales with the square root of the size of the polynomials, \(\sqrt{d}\).

Drawing the same distributions as above for this term shows that it is much larger than the first term, and that even for our toy problem parameters, there is some danger of getting an incorrect decryption, even after just a couple of additions.

The third term is similar - a sum of a set of error polynomials, multiplied by a "small polynomial". This has an error distribution that looks like this:

Combined, we can plot the growth of the maximum coefficient of the three terms combined, as a function of the number of additions that have occurred. This is shown here as a whisker plot, to give an idea of the variability of these maximum values. (Note that the mean of the error is close to zero; this is the distribution of the largest coefficient in magnitude.)

This shows that for the parameters that we have chosen, there is a high probability of errors for decoding a ciphertext that is the result of more than 2 additions, and some chance of 2 additions failing. This is because sometimes the maximum errors are larger than 64, which will cause an incorrect decryption when \(q/t=128\), as it is here. To have more headroom for operations such as this, we need to use a much larger ratio of \(q/t\), one that can cope with the amount of noise typically introduced by the number of operations that are done.

Unfortunately, the amount of noise introduced by the homomorphic multiplication of ciphertexts is much larger again.

Homomorphic multiplication

Homomorphic multiplication is procedurally simple, but comes with more complications than performing additions. As noted above, the message appears in the first element of the ciphertext just with a scaling of \(q m_1/t\). So multiplying the first elements of two ciphertexts and multiplying by \(t/q\) will produce a term with \(q m_1 m_2/t\) - which would be recoverable if we can still remove the masking terms.

The trick, then, to understanding the mechanism for homomorphic multiplication is to see how the masking terms are removed from the product of ciphertexts. To do this, the idea is to look at the ciphertexts as a simple polynomial in the powers of \(s\), the secret key. This is about the 3rd different way of using polynomials in this post, so it is a bit confusing, but it is key to understanding how it works.

We can write the first part of the decryption process such that each element of the ciphertext is a coefficient of a polynomial in \(s\): \[ \left[ {\bf ct}_0 + {\bf ct}_1 s^1 \right]_q \]

Remember that the components of \({\bf ct}\) and \(s\) are polynomials themselves, so this equation is a polynomial times one (\(s^0\)) added to a polynomial times another polynomial, then all that taken modulo \(x^d+1\) and modulo \(q\).

Now, we saw above that decryption produces a quantity that is independent of the mask terms \(a u\). \[ \left[ {\bf ct}_0 + {\bf ct}_1 s^1 \right]_q \rightarrow \frac{q}{t} m + noise\]

Well, consider now two ciphertexts \({\bf a}\) and \({\bf b}\), defined as the encryption of two messages, \(m_1\) and \(m_2\), that can be decrypted so:

\[ \left[ {\bf a}_0 + {\bf a}_1 s^1 \right]_q \rightarrow \frac{q}{t} m_1 + n_1\] \[ \left[ {\bf b}_0 + {\bf b}_1 s^1 \right]_q \rightarrow \frac{q}{t} m_2 + n_2\] where \(n_1\) and \(n_2\) represent the noise in the ciphertexts.

If we take the product of them we have:

\[ \left[ {\bf a}_0 + {\bf a}_1 s^1 \right]_q \left[ {\bf b}_0 + {\bf b}_1 s^1 \right]_q \rightarrow \left( \frac{q}{t} m_1 + n_1 \right) \left( \frac{q}{t} m_2 + n_2 \right) \]

The expression on the right hand side is independent of the masks used in calculating \({\bf a}\) and \({\bf b}\), so the left hand side must be independent of them as well.

If we expand the left-hand side in terms of \(s\) (and for convenience multiply by \(\frac{t}{q}\)) then we have \[ mult({\bf a},{\bf b}) = {\bf c}_0 + {\bf c}_1 s + {\bf c}_2 s^2 \] with \[ {\bf c}_0 = \left[ \frac{t}{q} {\bf a}_0 {\bf b}_0 \right]_q \] \[ {\bf c}_1 = \left[ \frac{t}{q} \left({\bf a}_1 {\bf b}_0 + {\bf a}_0 {\bf b}_1 \right)\right]_q \] \[ {\bf c}_2 = \left[ \frac{t}{q} {\bf a}_1 {\bf b}_1 \right]_q \]

Doing this means that we can work out the components of a new ciphertext that has one more element than the original ciphertexts, and that can be decoded properly using just powers of the secret key \(s\).

The definition of decryption is extended to include this extra term: \[ \left[ \left\lfloor \frac{t}{q} \left[ {\bf ct}_0 s^0 + {\bf ct}_1 s^1 + {\bf ct}_2 s^2 \right]_q \right\rceil \right]_t \]

That just adds another term which is a polynomial times the square of a polynomial. There is lots of bookkeeping to do, but it is just school level algebra (up to the modulus part!). This is a generalisation of the decryption step, and it allows us to decrypt the results of a homomorphic multiplication.
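To make this concrete, here is a Python sketch of the three-component multiplication and the extended decryption (our own illustrative code; we use a much larger \(q = 2^{28}\) than the toy parameters so that a single multiplication always decrypts, tiny error terms as before, and we lift ciphertext coefficients to the balanced range before multiplying, which the \(t/q\) scaling step requires):

```python
import numpy as np

rng = np.random.default_rng(4)
D, T, Q = 16, 7, 2**28   # a much larger q, so one multiplication decrypts reliably
DELTA = Q // T

def ring_mul(a, b, q=None):
    """Product mod x^D + 1; coefficients reduced mod q if q is given."""
    full = np.convolve(np.asarray(a, dtype=np.int64), np.asarray(b, dtype=np.int64))
    out = np.zeros(D, dtype=np.int64)
    for power, c in enumerate(full):
        sign = -1 if (power // D) % 2 else 1
        out[power % D] += sign * c
    return out if q is None else out % q

def small():
    return rng.integers(-1, 2, D)  # coefficients in {-1, 0, 1}

def centered(p):
    """Lift coefficients from [0, q) to the balanced range before multiplying."""
    return np.where(p > Q // 2, p - Q, p)

s = small()
a = rng.integers(0, Q, D)
pk = ((-ring_mul(a, s, Q) + small()) % Q, a)

def encrypt(m):
    u = small()
    return ((ring_mul(pk[0], u, Q) + small() + DELTA * np.asarray(m)) % Q,
            (ring_mul(pk[1], u, Q) + small()) % Q)

def multiply(ct_a, ct_b):
    """Three-component product: (c0, c1, c2) = t/q * (a0 b0, a1 b0 + a0 b1, a1 b1)."""
    a0, a1 = centered(ct_a[0]), centered(ct_a[1])
    b0, b1 = centered(ct_b[0]), centered(ct_b[1])
    scale = lambda p: np.round(T * p / Q).astype(np.int64) % Q
    return (scale(ring_mul(a0, b0)),
            scale(ring_mul(a1, b0) + ring_mul(a0, b1)),
            scale(ring_mul(a1, b1)))

def decrypt3(ct):
    """Extended decryption: [round(t/q * [c0 + c1 s + c2 s^2]_q)]_t."""
    s2 = ring_mul(s, s)
    v = (ct[0] + ring_mul(ct[1], s) + ring_mul(ct[2], s2)) % Q
    return np.round(T * v / Q).astype(np.int64) % T

m1 = np.zeros(D, dtype=np.int64); m1[0], m1[8] = 3, 4   # 3 + 4 x^8
m2 = np.zeros(D, dtype=np.int64); m2[0], m2[3] = 2, 5   # 2 + 5 x^3
result = decrypt3(multiply(encrypt(m1), encrypt(m2)))
print(result)   # equals the plaintext ring product [m1 m2]_t
```

Relinearisation, discussed below, is what would fold the extra \({\bf c}_2\) component back into a two-polynomial ciphertext.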

To see how this all works explicitly, consider the expansion of \({\bf a}\) and \({\bf b}\), in terms of the encryption process \[ {\bf a} = \left[{\bf pk}_0 u_1 + e_{11} + q m_1 / t, {\bf pk}_1 u_1 + e_{12} \right] \] \[ {\bf b} = \left[{\bf pk}_0 u_2 + e_{21} + q m_2 / t, {\bf pk}_1 u_2 + e_{22} \right] \]

If we expand the definition of multiplication, along with partial decryption of the result (i.e. decrypt it to the point before we divide by \(q/t\) and round), then the resulting expression is complicated. However, because the components of each ciphertext are structured to be able to remove the masking term (\(a u_i\)) during decryption, the result of this expansion does not depend on the masking term \(a\) from the public key at all!!! The expression obtained follows, which loosely shows the noise terms in the result of multiplication: \[ {\bf c}_0 s^0 + {\bf c}_1 s^1 + {\bf c}_2 s^2 \] \[ = \frac{q}{t} m_1 m_2 + e_{22} m_1 s + e_{12} m_2 s +e m_2 u_1 + e m_1 u_2 +e_{21} m_1+e_{11} m_2 \]

\[ +\frac{t}{q} e^2 u_1 u_2 + \frac{t}{q} e_{12} e_{22} s^2 +\frac{t}{q} e_{22} e s u_1+\frac{t}{q} e_{12} e s u_2 \] \[ +\frac{t}{q} e_{12} e_{21} s +\frac{t}{q} e_{11} e_{22} s +\frac{t}{q} e_{21} e u_1+\frac{t}{q} e_{11} e u_2 +\frac{t}{q}e_{11} e_{21} \]

There are lots of terms here, but now that we have removed the masks, the question is: how large are the noise terms (everything apart from the first) compared to our "noise budget" of \(q/(2t)\)?

To get a feeling for this, we simulated a large number of multiplications of encrypted random messages, for \(d=16\), \(t=7\), and \(q=7168=1024\times t\). The distribution of the coefficient size of each type of error term is shown below. Note that the total error needs to be larger than \(q/(2t) = 512\) to cause a decryption error. Of these terms, the terms involving an error polynomial, the message, and the secret key, \(e_{22} m_1 s + e_{12} m_2 s\), are the largest contributors.

This plot shows that for these parameters, the largest contribution is from the terms which involve an error polynomial multiplied by a message polynomial and the secret key. The largest coefficient of this noise is about 300. There are two of these terms, and the others are smaller. Combining all the error terms into one shows the total noise in the result of the multiplication. The distribution of these coefficients is shown here:

This shows that there is not quite enough headroom to do a single multiplication safely and then decrypt a correct result for these parameters (which are insecure anyway!) - around 1 coefficient in 4000 will have an error larger than 512, leading to about a 1% error rate in decryption.

So, to summarise: we can multiply ciphertexts if we treat them as polynomials in \(s\) - which thereby cancel out their own masking terms when decrypted - and multiply them as such, keeping track of the coefficients of the powers of \(s\) separately, and keeping track of the amount of noise so we can be confident that the result decrypts correctly.

Relinearisation and other topics

The multiplication strategy outlined above allows us to do many multiplications, at the expense of increasing the size of our ciphertext by one polynomial for each multiplication. The fact that the ciphertext grows in size may be an issue. It turns out there are methods for reducing the ciphertext size back to two polynomials, at the cost of increasing the noise. This is known as relinearisation, as you are removing the quadratic and higher terms in the polynomial in \(s\).

Another important technique that makes using this cryptographic scheme practical is the process of packing multiple messages into one plaintext to increase the throughput through parallelisation.

Relinearisation and these packed encoding mechanisms will be the subject of a subsequent blog post.


Roughly speaking, encryption is a masked embedding of a message in a polynomial ring with some added noise. Each ciphertext contains enough information to remove its own mask given the secret key. Because the embedding just involves a scaling of the message, addition and multiplication can still be performed on them, with some clever constructions to allow the masks to be removed afterwards. The security of the scheme comes from the difficulty of removing the masking in the presence of the noise when you don't know the private key. The difficulty of this problem leads to some excellent security properties, such as there being no known efficient quantum algorithm for attacking these systems.

If you've got this far, we hope you now have a better appreciation of how Ring Learning with Errors homomorphic encryption schemes (or at least the FV version of these) work. Going through the process of writing this has certainly helped us.


Thanks to Wilko Henecka and Hamish Ivey-Law for substantially improving this post with their feedback!