annotate tommath.src @ 203:e109027b9edf libtommath LTM_DB_0.46 LTM_DB_0.47
Don't remove ~ files on make clean (and find -type was wrong anyway)
author | Matt Johnston <matt@ucc.asn.au> |
---|---|
date | Wed, 11 May 2005 16:27:28 +0000 |
parents | d8254fc979e9 |
children |
rev | line source |
---|---|
19 | 1 \documentclass[b5paper]{book} |
2 \usepackage{hyperref} | |
3 \usepackage{makeidx} | |
4 \usepackage{amssymb} | |
5 \usepackage{color} | |
6 \usepackage{alltt} | |
7 \usepackage{graphicx} | |
8 \usepackage{layout} | |
9 \def\union{\cup} | |
10 \def\intersect{\cap} | |
11 \def\getsrandom{\stackrel{\rm R}{\gets}} | |
12 \def\cross{\times} | |
13 \def\cat{\hspace{0.5em} \| \hspace{0.5em}} | |
14 \def\catn{$\|$} | |
15 \def\divides{\hspace{0.3em} | \hspace{0.3em}} | |
16 \def\nequiv{\not\equiv} | |
17 \def\approx{\raisebox{0.2ex}{\mbox{\small $\sim$}}} | |
18 \def\lcm{{\rm lcm}} | |
19 \def\gcd{{\rm gcd}} | |
20 \def\log{{\rm log}} | |
21 \def\ord{{\rm ord}} | |
22 \def\abs{{\mathit abs}} | |
23 \def\rep{{\mathit rep}} | |
24 \def\mod{{\mathit\ mod\ }} | |
25 \renewcommand{\pmod}[1]{\ ({\rm mod\ }{#1})} | |
26 \newcommand{\floor}[1]{\left\lfloor{#1}\right\rfloor} | |
27 \newcommand{\ceil}[1]{\left\lceil{#1}\right\rceil} | |
28 \def\Or{{\rm\ or\ }} | |
29 \def\And{{\rm\ and\ }} | |
30 \def\iff{\hspace{1em}\Longleftrightarrow\hspace{1em}} | |
31 \def\implies{\Rightarrow} | |
32 \def\undefined{{\rm ``undefined''}} | |
33 \def\Proof{\vspace{1ex}\noindent {\bf Proof:}\hspace{1em}} | |
34 \let\oldphi\phi | |
35 \def\phi{\varphi} | |
36 \def\Pr{{\rm Pr}} | |
37 \newcommand{\str}[1]{{\mathbf{#1}}} | |
38 \def\F{{\mathbb F}} | |
39 \def\N{{\mathbb N}} | |
40 \def\Z{{\mathbb Z}} | |
41 \def\R{{\mathbb R}} | |
42 \def\C{{\mathbb C}} | |
43 \def\Q{{\mathbb Q}} | |
44 \definecolor{DGray}{gray}{0.5} | |
45 \newcommand{\emailaddr}[1]{\mbox{$<${#1}$>$}} | |
46 \def\twiddle{\raisebox{0.3ex}{\mbox{\tiny $\sim$}}} | |
47 \def\gap{\vspace{0.5ex}} | |
48 \makeindex | |
49 \begin{document} | |
50 \frontmatter | |
51 \pagestyle{empty} | |
190 | 52 \title{Multi--Precision Math} |
19 | 53 \author{\mbox{ |
54 %\begin{small} | |
55 \begin{tabular}{c} | |
56 Tom St Denis \\ | |
57 Algonquin College \\ | |
58 \\ | |
59 Mads Rasmussen \\ | |
60 Open Communications Security \\ | |
61 \\ | |
62 Greg Rose \\ | |
63 QUALCOMM Australia \\ | |
64 \end{tabular} | |
65 %\end{small} | |
66 } | |
67 } | |
68 \maketitle | |
190 | 69 This text has been placed in the public domain. This text corresponds to the v0.35 release of the |
19 | 70 LibTomMath project. |
71 | |
72 \begin{alltt} | |
73 Tom St Denis | |
74 111 Banning Rd | |
75 Ottawa, Ontario | |
76 K2L 1C3 | |
77 Canada | |
78 | |
79 Phone: 1-613-836-3160 | |
80 Email: [email protected] | |
81 \end{alltt} | |
82 | |
83 This text is formatted to the international B5 paper size of 176mm wide by 250mm tall using the \LaTeX{} | |
84 {\em book} macro package and the Perl {\em booker} package. | |
85 | |
86 \tableofcontents | |
87 \listoffigures | |
190 | 88 \chapter*{Prefaces} |
89 When I tell people about my LibTom projects and that I release them as public domain they are often puzzled. |
90 They ask why I did it and especially why I continue to work on them for free. The best I can explain it is ``Because I can.'' |
91 Which seems odd and perhaps too terse for adult conversation. I often qualify it with ``I am able, I am willing.'' which |
92 perhaps explains it better. I am the first to admit there is not anything that special with what I have done. Perhaps |
93 others can see that too and then we would have a society to be proud of. My LibTom projects are what I am doing to give |
94 back to society in the form of tools and knowledge that can help others in their endeavours. |
95 |
96 I started writing this book because it was the most logical task to further my goal of open academia. The LibTomMath source |
97 code itself was written to be easy to follow and learn from. There are times, however, where pure C source code does not |
98 explain the algorithms properly. Hence this book. The book literally starts with the foundation of the library and works |
99 itself outwards to the more complicated algorithms. The use of both pseudo--code and verbatim source code provides a duality |
100 of ``theory'' and ``practice'' that the computer science students of the world shall appreciate. I never deviate too far |
101 from relatively straightforward algebra and I hope that this book can be a valuable learning asset. |
102 |
103 This book and indeed much of the LibTom projects would not exist in their current form if it was not for a plethora |
104 of kind people donating their time, resources and kind words to help support my work. Writing a text of significant |
105 length (along with the source code) is a tiresome and lengthy process. Currently the LibTom project is four years old, |
106 comprises literally thousands of users and over 100,000 lines of source code, TeX and other material. People like Mads and Greg |
107 were there at the beginning to encourage me to work well. It is amazing how timely validation from others can boost morale to |
108 continue the project. Definitely my parents were there for me by providing room and board during the many months of work in 2003. |
109 |
110 To my many friends whom I have met through the years I thank you for the good times and the words of encouragement. I hope I |
111 honour your kind gestures with this project. |
112 |
113 Open Source. Open Academia. Open Minds. |
19 | 114 |
115 \begin{flushright} Tom St Denis \end{flushright} | |
116 | |
117 \newpage | |
118 I found the opportunity to work with Tom appealing for several reasons: not only could I broaden my own horizons, but I could also | |
119 contribute to educating others facing the problem of having to handle big number mathematical calculations. | |
120 | |
121 This book is Tom's child and he has been caring for and fostering the project ever since the beginning with a clear mind of | |
122 how he wanted the project to turn out. I have helped by proofreading the text and we have had several discussions about | |
123 the layout and language used. | |
124 | |
125 I hold a masters degree in cryptography from the University of Southern Denmark and have always been interested in the | |
126 practical aspects of cryptography. | |
127 | |
128 Having worked in the security consultancy business for several years in S\~{a}o Paulo, Brazil, I have been in touch with a | |
129 great deal of work in which multiple precision mathematics was needed. Understanding the possibilities for speeding up | |
130 multiple precision calculations is often very important since we deal with outdated machine architecture where modular | |
131 reductions, for example, become painfully slow. | |
132 | |
133 This text is for people who stop and wonder when examining algorithms such as RSA for the first time and ask | |
134 themselves, ``You tell me this is only secure for large numbers, fine; but how do you implement these numbers?'' | |
135 | |
136 \begin{flushright} | |
137 Mads Rasmussen | |
138 | |
139 S\~{a}o Paulo - SP | |
140 | |
141 Brazil | |
142 \end{flushright} | |
143 | |
144 \newpage | |
145 It's all because I broke my leg. That just happened to be at about the same time that Tom asked for someone to review the section of the book about | |
146 Karatsuba multiplication. I was laid up, alone and immobile, and thought ``Why not?'' I vaguely knew what Karatsuba multiplication was, but not | |
147 really, so I thought I could help, learn, and stop myself from watching daytime cable TV, all at once. | |
148 | |
149 At the time of writing this, I've still not met Tom or Mads in meatspace. I've been following Tom's progress since his first splash on the | |
150 sci.crypt Usenet news group. I watched him go from a clueless newbie, to the cryptographic equivalent of a reformed smoker, to a real | |
151 contributor to the field, over a period of about two years. I've been impressed with his obvious intelligence, and astounded by his productivity. | |
152 Of course, he's young enough to be my own child, so he doesn't have my problems with staying awake. | |
153 | |
154 When I reviewed that single section of the book, in its very earliest form, I was very pleasantly surprised. So I decided to collaborate more fully, | |
155 and at least review all of it, and perhaps write some bits too. There's still a long way to go with it, and I have watched a number of close | |
156 friends go through the mill of publication, so I think that the way to go is longer than Tom thinks it is. Nevertheless, it's a good effort, | |
157 and I'm pleased to be involved with it. | |
158 | |
159 \begin{flushright} | |
160 Greg Rose, Sydney, Australia, June 2003. | |
161 \end{flushright} | |
162 | |
163 \mainmatter | |
164 \pagestyle{headings} | |
165 \chapter{Introduction} | |
166 \section{Multiple Precision Arithmetic} | |
167 | |
168 \subsection{What is Multiple Precision Arithmetic?} | |
169 When we think of long-hand arithmetic such as addition or multiplication we rarely consider the fact that we instinctively | |
170 raise or lower the precision of the numbers we are dealing with. For example, in decimal we can almost immediately | |
171 reason that $7$ times $6$ is $42$. However, $42$ has two digits of precision as opposed to the one digit we started with. | |
172 A further multiplication by, say, $3$ results in a result of larger precision, $126$. In these few examples we have multiple | |
173 precisions for the numbers we are working with. Despite the various levels of precision a single subset\footnote{With the occasional optimization.} | |
174 of algorithms can be designed to accommodate them. | |
175 | |
176 By way of comparison a fixed or single precision operation would lose precision on various operations. For example, in | |
177 the decimal system with a fixed precision of one digit $6 \cdot 7 = 2$, since only the least significant digit of the true result $42$ is retained. | |
178 | |
179 Essentially at the heart of computer based multiple precision arithmetic are the same long-hand algorithms taught in | |
180 schools to manually add, subtract, multiply and divide. | |
181 | |
182 \subsection{The Need for Multiple Precision Arithmetic} | |
183 The most prevalent need for multiple precision arithmetic, often referred to as ``bignum'' math, is within the implementation | |
184 of public-key cryptography algorithms. Algorithms such as RSA \cite{RSAREF} and Diffie-Hellman \cite{DHREF} require | |
185 integers of significant magnitude to resist known cryptanalytic attacks. For example, at the time of this writing a | |
186 typical RSA modulus would be at least $10^{309}$. However, modern programming languages such as ISO C \cite{ISOC} and | |
187 Java \cite{JAVA} only provide intrinsic support for integers which are relatively small and single precision. | |
188 | |
189 \begin{figure}[!here] | |
190 \begin{center} | |
191 \begin{tabular}{|r|c|} | |
192 \hline \textbf{Data Type} & \textbf{Range} \\ | |
193 \hline char & $-128 \ldots 127$ \\ | |
194 \hline short & $-32768 \ldots 32767$ \\ | |
195 \hline long & $-2147483648 \ldots 2147483647$ \\ | |
196 \hline long long & $-9223372036854775808 \ldots 9223372036854775807$ \\ | |
197 \hline | |
198 \end{tabular} | |
199 \end{center} | |
200 \caption{Typical Data Types for the C Programming Language} | |
201 \label{fig:ISOC} | |
202 \end{figure} | |
203 | |
204 The largest data type guaranteed to be provided by the ISO C programming | |
205 language\footnote{As per the ISO C standard. However, each compiler vendor is allowed to augment the precision as they | |
206 see fit.} can only represent values up to $10^{19}$ as shown in figure \ref{fig:ISOC}. On its own the C language is | |
207 insufficient to accommodate the magnitude required for the problem at hand. An RSA modulus of magnitude $10^{19}$ could be | |
208 trivially factored\footnote{Pollard's Rho algorithm would take only about $2^{16}$ steps.} on the average desktop computer, | |
209 rendering any protocol based on the algorithm insecure. Multiple precision algorithms solve this very problem by | |
210 extending the range of representable integers while using single precision data types. | |
211 | |
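To make the limitation concrete, the following short C program (an illustration only, not part of LibTomMath) multiplies two ten-digit decimal values. The true product requires roughly 66 bits, so the \textbf{unsigned long long} result silently wraps modulo $2^{64}$ and the printed value is not the correct answer.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
   /* Two ten-digit decimal values; their true product
    * 99999999980000000001 needs about 66 bits of precision. */
   unsigned long long a = 9999999999ULL;
   unsigned long long b = 9999999999ULL;

   /* The multiplication wraps modulo 2^64, so the printed value
    * is not the mathematically correct product.                 */
   printf("%llu\n", a * b);
   return 0;
}
\end{verbatim}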
212 Most advancements in fast multiple precision arithmetic stem from the need for faster and more efficient cryptographic | |
213 primitives. Faster modular reduction and exponentiation algorithms such as Barrett's algorithm, which have appeared in | |
214 various cryptographic journals, can render algorithms such as RSA and Diffie-Hellman more efficient. In fact, several | |
215 major companies such as RSA Security, Certicom and Entrust have built entire product lines on the implementation and | |
216 deployment of efficient algorithms. | |
217 | |
218 However, cryptography is not the only field of study that can benefit from fast multiple precision integer routines. | |
219 Another auxiliary use of multiple precision integers is high precision floating point data types. | |
220 The basic IEEE \cite{IEEE} standard floating point type is made up of an integer mantissa $q$, an exponent $e$ and a sign bit $s$. | |
221 Numbers are given in the form $n = (-1)^s \cdot q \cdot b^e$ where $b = 2$ is the most common base for IEEE. Since IEEE | |
222 floating point is meant to be implemented in hardware the precision of the mantissa is often fairly small | |
223 (\textit{23, 52 and 64 bits}). The mantissa is merely an integer and a multiple precision integer could be used to create | |
224 a mantissa of much larger precision than hardware alone can efficiently support. This approach could be useful where | |
225 scientific applications must minimize the total output error over long calculations. | |
226 | |
142 | 227 Yet another use for large integers is within arithmetic on polynomials of large characteristic (i.e. $GF(p)[x]$ for large $p$). |
19 | 228 In fact the library discussed within this text has already been used to form a polynomial basis library\footnote{See \url{http://poly.libtomcrypt.org} for more details.}. |
229 | |
230 \subsection{Benefits of Multiple Precision Arithmetic} | |
231 \index{precision} | |
232 The benefit of multiple precision representations over single or fixed precision representations is that | |
233 no precision is lost while representing the result of an operation which requires excess precision. For example, | |
234 the product of two $n$-bit integers requires at least $2n$ bits of precision to be represented faithfully. A multiple | |
235 precision algorithm would augment the precision of the destination to accommodate the result while a single precision system | |
236 would truncate excess bits to maintain a fixed level of precision. | |
237 | |
238 It is possible to implement algorithms which require large integers with fixed precision algorithms. For example, elliptic | |
239 curve cryptography (\textit{ECC}) is often implemented on smartcards by fixing the precision of the integers to the maximum | |
240 size the system will ever need. Such an approach can lead to vastly simpler algorithms which can accommodate the | |
241 integers required even if the host platform cannot natively accommodate them\footnote{For example, the average smartcard | |
242 processor has an 8 bit accumulator.}. However, as efficient as such an approach may be, the resulting source code is not | |
243 normally very flexible. It cannot, at runtime, accommodate inputs of higher magnitude than the designer anticipated. | |
244 | |
245 Multiple precision algorithms have the most overhead of any style of arithmetic. For the most part the | |
246 overhead can be kept to a minimum with careful planning, but overall, it is not well suited for most memory starved | |
247 platforms. However, multiple precision algorithms do offer the most flexibility in terms of the magnitude of the | |
248 inputs. That is, the same algorithms based on multiple precision integers can accommodate any reasonable size input | |
249 without the designer's explicit forethought. This leads to lower cost of ownership for the code as it only has to | |
250 be written and tested once. | |
251 | |
252 \section{Purpose of This Text} | |
253 The purpose of this text is to instruct the reader regarding how to implement efficient multiple precision algorithms. | |
254 That is, to not only explain a limited subset of the core theory behind the algorithms but also the various ``housekeeping'' | |
255 elements that are neglected by authors of other texts on the subject. Several well renowned texts \cite{TAOCPV2,HAC} | |
256 give considerably detailed explanations of the theoretical aspects of algorithms and often very little information | |
257 regarding the practical implementation aspects. | |
258 | |
259 In most cases how an algorithm is explained and how it is actually implemented are two very different concepts. For | |
260 example, the Handbook of Applied Cryptography (\textit{HAC}), algorithm 14.7 on page 594, gives a relatively simple | |
261 algorithm for performing multiple precision integer addition. However, the description lacks any discussion concerning | |
262 the fact that the two integer inputs may be of differing magnitudes. As a result the implementation is not as simple | |
263 as the text would lead people to believe. Similarly the division routine (\textit{algorithm 14.20, p. 598}) does not | |
264 discuss how to handle the sign or the dividend's decreasing magnitude in the main loop (\textit{step \#3}). | |
265 | |
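To illustrate the sort of housekeeping such descriptions leave out, consider the following sketch of digit-wise addition on two little-endian digit arrays of differing lengths. The helper name, the tiny radix of ten and the use of plain arrays (rather than LibTomMath's mp\_int) are purely for illustration; the point is simply that the shorter input's missing digits must be treated as zero and a final carry may lengthen the result.

\begin{verbatim}
#include <stddef.h>

typedef unsigned int digit;      /* hypothetical single precision type */
#define RADIX 10                 /* small radix for illustration only  */

/* Add two little-endian digit arrays of possibly different lengths.
 * c must have room for max(na, nb) + 1 digits; the number of digits
 * actually written is returned.                                      */
size_t digit_add(const digit *a, size_t na,
                 const digit *b, size_t nb, digit *c)
{
   size_t i, n = (na > nb) ? na : nb;
   digit carry = 0;

   for (i = 0; i < n; i++) {
      digit t = carry;
      if (i < na) t += a[i];     /* missing digits count as zero */
      if (i < nb) t += b[i];
      c[i]  = t % RADIX;         /* low digit of the column sum  */
      carry = t / RADIX;         /* high part becomes the carry  */
   }
   if (carry != 0) {
      c[n++] = carry;            /* the result grew by one digit */
   }
   return n;
}
\end{verbatim}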
266 Both texts also omit several key optimized algorithms such as the ``Comba'' and Karatsuba multipliers | |
267 and fast modular inversion, which we consider practical oversights. These optimized algorithms are vital to achieve | |
268 any form of useful performance in non-trivial applications. | |
269 | |
270 To solve this problem the focus of this text is on the practical aspects of implementing a multiple precision integer | |
271 package. As a case study the ``LibTomMath''\footnote{Available at \url{http://math.libtomcrypt.org}} package is used | |
272 to demonstrate algorithms with real implementations\footnote{In the ISO C programming language.} that have been field | |
273 tested and work very well. The LibTomMath library is freely available on the Internet for all uses and this text | |
274 discusses a very large portion of the inner workings of the library. | |
275 | |
276 The algorithms that are presented will always include at least one ``pseudo-code'' description followed | |
277 by the actual C source code that implements the algorithm. The pseudo-code can be used to implement the same | |
278 algorithm in other programming languages as the reader sees fit. | |
279 | |
280 This text shall also serve as a walkthrough of the creation of multiple precision algorithms from scratch, showing | |
281 the reader how the algorithms fit together as well as where to start on various tasks. | |
282 | |
283 \section{Discussion and Notation} | |
284 \subsection{Notation} | |
142 | 285 A multiple precision integer of $n$-digits shall be denoted as $x = (x_{n-1}, \ldots, x_1, x_0)_{ \beta }$ and represent |
19 | 286 the integer $x \equiv \sum_{i=0}^{n-1} x_i\beta^i$. The elements of the array $x$ are said to be the radix $\beta$ digits |
287 of the integer. For example, $x = (1,2,3)_{10}$ would represent the integer | |
288 $1\cdot 10^2 + 2\cdot10^1 + 3\cdot10^0 = 123$. | |
289 | |
290 \index{mp\_int} | |
291 The term ``mp\_int'' shall refer to a composite structure which contains the digits of the integer it represents, as well | |
292 as auxiliary data required to manipulate it. These additional members are discussed further in section | |
293 \ref{sec:MPINT}. For the purposes of this text a ``multiple precision integer'' and an ``mp\_int'' are assumed to be | |
294 synonymous. When an algorithm is specified to accept an mp\_int variable it is assumed the various auxiliary data members | |
295 are present as well. An expression of the type \textit{variablename.item} implies that it should evaluate to the | |
296 member named ``item'' of the variable. For example, a string of characters may have a member ``length'' which would | |
297 evaluate to the number of characters in the string. If the string $a$ equals ``hello'' then it follows that | |
298 $a.length = 5$. | |
299 | |
300 For certain discussions more generic algorithms are presented to help the reader understand the final algorithm used | |
301 to solve a given problem. When an algorithm is described as accepting an integer input it is assumed the input is | |
302 a plain integer with no additional multiple-precision members. That is, algorithms that use integers as opposed to | |
303 mp\_ints as inputs do not concern themselves with the housekeeping operations required such as memory management. These | |
304 algorithms will be used to establish the relevant theory which will subsequently be used to describe a multiple | |
305 precision algorithm to solve the same problem. | |
306 | |
307 \subsection{Precision Notation} | |
142 | 308 The variable $\beta$ represents the radix of a single digit of a multiple precision integer and |
309 must be of the form $q^p$ for $q, p \in \Z^+$. A single precision variable must be able to represent integers in | |
310 the range $0 \le x < q \beta$ while a double precision variable must be able to represent integers in the range | |
311 $0 \le x < q \beta^2$. The extra radix-$q$ factor allows additions and subtractions to proceed without truncation of the | |
312 carry. Since all modern computers are binary, it is assumed that $q$ is two. | |
19 | 313 |
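For instance, choosing $q = 2$ and $p = 28$ (a digit size picked purely for illustration) gives
\begin{center}
$\beta = 2^{28}, \qquad 0 \le x < 2 \cdot 2^{28} = 2^{29} \mbox{ (single precision)}, \qquad 0 \le x < 2 \cdot 2^{56} = 2^{57} \mbox{ (double precision)}.$
\end{center}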
314 \index{mp\_digit} \index{mp\_word} | |
315 Within the source code that will be presented for each algorithm, the data type \textbf{mp\_digit} will represent | |
316 a single precision integer type, while, the data type \textbf{mp\_word} will represent a double precision integer type. In | |
317 several algorithms (notably the Comba routines) temporary results will be stored in arrays of double precision mp\_words. | |
318 For the purposes of this text $x_j$ will refer to the $j$'th digit of a single precision array and $\hat x_j$ will refer to | |
319 the $j$'th digit of a double precision array. Whenever an expression is to be assigned to a double precision | |
320 variable it is assumed that all single precision variables are promoted to double precision during the evaluation. | |
321 Expressions that are assigned to a single precision variable are truncated to fit within the precision of a single | |
322 precision data type. | |
323 | |
324 For example, if $\beta = 10^2$ a single precision data type may represent a value in the | |
325 range $0 \le x < 10^3$, while a double precision data type may represent a value in the range $0 \le x < 10^5$. Let | |
326 $a = 23$ and $b = 49$ represent two single precision variables. The single precision product shall be written | |
327 as $c \leftarrow a \cdot b$ while the double precision product shall be written as $\hat c \leftarrow a \cdot b$. | |
328 In this particular case, $\hat c = 1127$ and $c = 127$. The most significant digit of the product would not fit | |
329 in a single precision data type and as a result $c \ne \hat c$. | |
330 | |
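The same example can be phrased in C. The stand-in type names and the decimal radix below are chosen only to mirror the example above; the actual mp\_digit and mp\_word types used by LibTomMath are larger binary types.

\begin{verbatim}
#include <stdio.h>

typedef unsigned short digit_t;  /* stand-in for a single precision type */
typedef unsigned int   word_t;   /* stand-in for a double precision type */

int main(void)
{
   digit_t a = 23, b = 49;

   /* promote to double precision before multiplying: c_hat = 1127 */
   word_t  c_hat = (word_t)a * (word_t)b;

   /* truncate to the single precision range [0, q*beta) = [0, 1000) */
   digit_t c = (digit_t)(c_hat % 1000);

   printf("c_hat = %u, c = %u\n", (unsigned)c_hat, (unsigned)c);
   return 0;
}
\end{verbatim}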
331 \subsection{Algorithm Inputs and Outputs} | |
332 Within the algorithm descriptions all variables are assumed to be scalars of either single or double precision | |
333 as indicated. The only exception to this rule is when variables have been indicated to be of type mp\_int. This | |
334 distinction is important as scalars are often used as array indices and various other counters. | |
335 | |
336 \subsection{Mathematical Expressions} | |
337 The $\lfloor \mbox{ } \rfloor$ brackets imply an expression truncated to an integer not greater than the expression | |
338 itself. For example, $\lfloor 5.7 \rfloor = 5$. Similarly the $\lceil \mbox{ } \rceil$ brackets imply an expression | |
339 rounded to an integer not less than the expression itself. For example, $\lceil 5.1 \rceil = 6$. Typically when | |
340 the $/$ division symbol is used the intention is to perform an integer division with truncation. For example, | |
341 $5/2 = 2$ which will often be written as $\lfloor 5/2 \rfloor = 2$ for clarity. When an expression is written as a | |
342 fraction a real value division is implied, for example ${5 \over 2} = 2.5$. | |
343 | |
142 | 344 The norm of a multiple precision integer, for example $\vert \vert x \vert \vert$, will be used to represent the number of digits in the representation |
19 | 345 of the integer. For example, $\vert \vert 123 \vert \vert = 3$ and $\vert \vert 79452 \vert \vert = 5$. |
346 | |
347 \subsection{Work Effort} | |
348 \index{big-Oh} | |
349 To measure the efficiency of the specified algorithms, a modified big-Oh notation is used. In this system all | |
350 single precision operations are considered to have the same cost\footnote{Except where explicitly noted.}. | |
351 That is, a single precision addition, multiplication and division are assumed to take the same time to | |
352 complete. While this is generally not true in practice, it will simplify the discussions considerably. | |
353 | |
354 Some algorithms have slight advantages over others which is why some constants will not be removed in | |
355 the notation. For example, a normal baseline multiplication (section \ref{sec:basemult}) requires $O(n^2)$ work while a | |
356 baseline squaring (section \ref{sec:basesquare}) requires $O({{n^2 + n}\over 2})$ work. In standard big-Oh notation these | |
357 would both be said to be equivalent to $O(n^2)$. However, | |
358 in the context of this text this is not the case as the magnitude of the inputs will typically be rather small. As a | |
359 result small constant factors in the work effort will make an observable difference in algorithm efficiency. | |
360 | |
361 All of the algorithms presented in this text have a polynomial time work level. That is, of the form | |
362 $O(n^k)$ for $n, k \in \Z^{+}$. This will help make useful comparisons in terms of the speed of the algorithms and how | |
363 various optimizations will help pay off in the long run. | |
364 | |
365 \section{Exercises} | |
366 Within the more advanced chapters a section will be set aside to give the reader some challenging exercises related to | |
367 the discussion at hand. These exercises are not designed to be prize winning problems, but instead to be thought | |
368 provoking. Wherever possible the problems are forward minded, stating problems that will be answered in subsequent | |
369 chapters. The reader is encouraged to finish the exercises as they appear to get a better understanding of the | |
370 subject material. | |
371 | |
372 That being said, the problems are designed to affirm knowledge of a particular subject matter. Students in particular | |
373 are encouraged to verify they can answer the problems correctly before moving on. | |
374 | |
375 Similar to the exercises of \cite[pp. ix]{TAOCPV2} these exercises are given a scoring system based on the difficulty of | |
376 the problem. However, unlike \cite{TAOCPV2} the problems do not get nearly as hard. The scoring of these | |
377 exercises ranges from one (the easiest) to five (the hardest). The following table summarizes the | |
378 scoring system used. | |
379 | |
380 \begin{figure}[here] | |
381 \begin{center} | |
382 \begin{small} | |
383 \begin{tabular}{|c|l|} | |
384 \hline $\left [ 1 \right ]$ & An easy problem that should only take the reader a matter of \\ | |
385 & minutes to solve. Usually does not involve much computer time \\ | |
386 & to solve. \\ | |
387 \hline $\left [ 2 \right ]$ & An easy problem that involves a marginal amount of computer \\ | |
388 & time usage. Usually requires a program to be written to \\ | |
389 & solve the problem. \\ | |
390 \hline $\left [ 3 \right ]$ & A moderately hard problem that requires a non-trivial amount \\ | |
391 & of work. Usually involves trivial research and development of \\ | |
392 & new theory from the perspective of a student. \\ | |
393 \hline $\left [ 4 \right ]$ & A moderately hard problem that involves a non-trivial amount \\ | |
394 & of work and research, the solution to which will demonstrate \\ | |
395 & a higher mastery of the subject matter. \\ | |
396 \hline $\left [ 5 \right ]$ & A hard problem that involves concepts that are difficult for a \\ | |
397 & novice to solve. Solutions to these problems will demonstrate a \\ | |
398 & complete mastery of the given subject. \\ | |
399 \hline | |
400 \end{tabular} | |
401 \end{small} | |
402 \end{center} | |
403 \caption{Exercise Scoring System} | |
404 \end{figure} | |
405 | |
406 Problems at the first level are meant to be simple questions that the reader can answer quickly without programming a solution or | |
407 devising new theory. These problems are quick tests to see if the material is understood. Problems at the second level | |
408 are also designed to be easy but will require a program or algorithm to be implemented to arrive at the answer. These | |
409 two levels are essentially entry level questions. | |
410 | |
411 Problems at the third level are meant to be a bit more difficult than the first two levels. The answer is often | |
412 fairly obvious but arriving at an exacting solution requires some thought and skill. These problems will almost always | |
413 involve devising a new algorithm or implementing a variation of another algorithm previously presented. Readers who can | |
414 answer these questions will feel comfortable with the concepts behind the topic at hand. | |
415 | |
416 Problems at the fourth level are meant to be similar to those of the level three questions except they will require | |
417 additional research to be completed. The reader will most likely not know the answer right away, nor will the text provide | |
418 the exact details of the answer until a subsequent chapter. | |
419 | |
420 Problems at the fifth level are meant to be the hardest | |
421 problems relative to all the other problems in the chapter. People who can correctly answer fifth level problems have a | |
422 mastery of the subject matter at hand. | |
423 | |
424 Often problems will be tied together. The purpose of this is to start a chain of thought that will be discussed in future chapters. The reader | |
425 is encouraged to answer the follow-up problems and to consider how each problem relates to the others. | |
426 | |
427 \section{Introduction to LibTomMath} | |
428 | |
429 \subsection{What is LibTomMath?} | |
430 LibTomMath is a free and open source multiple precision integer library written entirely in portable ISO C. By portable it | |
431 is meant that the library does not contain any code that is computer platform dependent or otherwise problematic to use on | |
432 any given platform. | |
433 | |
434 The library has been successfully tested under numerous operating systems including Unix\footnote{All of these | |
435 trademarks belong to their respective rightful owners.}, MacOS, Windows, Linux, PalmOS and on standalone hardware such | |
436 as the Gameboy Advance. The library is designed to contain enough functionality to be able to develop applications such | |
437 as public key cryptosystems and still maintain a relatively small footprint. | |
438 | |
439 \subsection{Goals of LibTomMath} | |
440 | |
441 Libraries which obtain the most efficiency are rarely written in a high level programming language such as C. However, | |
442 even though this library is written entirely in ISO C, considerable care has been taken to optimize the algorithm implementations within the | |
443 library. Specifically the code has been written to work well with the GNU C Compiler (\textit{GCC}) on both x86 and ARM | |
444 processors. Wherever possible, highly efficient algorithms, such as Karatsuba multiplication, sliding window | |
445 exponentiation and Montgomery reduction have been provided to make the library more efficient. | |
446 | |
447 Even with the nearly optimal and specialized algorithms that have been included the Application Programming Interface | |
448 (\textit{API}) has been kept as simple as possible. Often generic placeholder routines will make use of specialized | |
449 algorithms automatically without the developer's specific attention. One such example is the generic multiplication | |
450 algorithm \textbf{mp\_mul()} which will automatically use Toom--Cook, Karatsuba, Comba or baseline multiplication | |
451 based on the magnitude of the inputs and the configuration of the library. | |
452 | |
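The following self-contained sketch shows the general shape of such a dispatcher. The cutoff constants and helper names are hypothetical stand-ins rather than LibTomMath's actual identifiers or tuning values; the point is only that the caller of the generic routine never has to choose an algorithm explicitly.

\begin{verbatim}
#include <stdio.h>

/* hypothetical cutoffs, not LibTomMath's tuned values */
#define KARATSUBA_CUTOFF  80
#define TOOM_CUTOFF      350
#define COMBA_LIMIT       16

/* pick a multiplication strategy from the size of the larger operand */
static const char *choose_multiplier(int digits)
{
   if (digits >= TOOM_CUTOFF)      return "toom-cook";
   if (digits >= KARATSUBA_CUTOFF) return "karatsuba";
   if (digits <= COMBA_LIMIT)      return "comba";
   return "baseline";
}

int main(void)
{
   int sizes[] = { 8, 120, 500 }, i;

   for (i = 0; i < 3; i++) {
      printf("%3d digits -> %s\n", sizes[i], choose_multiplier(sizes[i]));
   }
   return 0;
}
\end{verbatim}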
453 Making LibTomMath as efficient as possible is not the only goal of the LibTomMath project. Ideally the library should | |
454 be source compatible with another popular library which makes it more attractive for developers to use. In this case the | |
455 MPI library was used as an API template for all the basic functions. MPI was chosen because it is another library that fits | |
456 in the same niche as LibTomMath. Even though LibTomMath uses MPI as the template for the function names and argument | |
457 passing conventions, it has been written from scratch by Tom St Denis. | |
458 | |
459 The project is also meant to act as a learning tool for students, the logic being that no easy-to-follow ``bignum'' | |
460 library exists which can be used to teach computer science students how to perform fast and reliable multiple precision | |
461 integer arithmetic. To this end the source code has been given quite a few comments and algorithm discussion points. | |
462 | |
463 \section{Choice of LibTomMath} | |
464 LibTomMath was chosen as the case study of this text not only because the author of both projects is one and the same but | |
465 for more worthy reasons. Other libraries such as GMP \cite{GMP}, MPI \cite{MPI}, LIP \cite{LIP} and OpenSSL | |
466 \cite{OPENSSL} have multiple precision integer arithmetic routines but would not be ideal for this text for | |
467 reasons that will be explained in the following sub-sections. | |
468 | |
469 \subsection{Code Base} | |
470 The LibTomMath code base is all portable ISO C source code. This means that there are no platform dependent conditional | |
471 segments of code littered throughout the source. This clean and uncluttered approach to the library means that a | |
472 developer can more readily discern the true intent of a given section of source code without trying to keep track of | |
473 what conditional code will be used. | |
474 | |
475 The code base of LibTomMath is well organized. Each function is in its own separate source code file | |
476 which allows the reader to find a given function very quickly. On average there are $76$ lines of code per source | |
477 file which makes the source very easy to follow. By comparison MPI and LIP are single file projects, making code tracing | |
478 very hard. GMP has many conditional code segments which also hinder tracing. | |
479 | |
480 When compiled with GCC for the x86 processor and optimized for speed the entire library is approximately $100$KiB\footnote{The notation ``KiB'' means $2^{10}$ octets, similarly ``MiB'' means $2^{20}$ octets.} | |
481 which is fairly small compared to GMP (over $250$KiB). LibTomMath is slightly larger than MPI (which compiles to about | |
482 $50$KiB) but LibTomMath is also much faster and more complete than MPI. | |
483 | |
484 \subsection{API Simplicity} | |
485 LibTomMath is designed after the MPI library and shares the API design. Quite often programs that use MPI will build | |
486 with LibTomMath without change. The function names correlate directly to the action they perform. Almost all of the | |
487 functions share the same parameter passing convention. The learning curve is fairly shallow with the API provided | |
488 which is an extremely valuable benefit for the student and developer alike. | |
489 | |
490 The LIP library is an example of a library with an API that is awkward to work with. LIP uses function names that are often ``compressed'' to | |
491 illegible short hand. LibTomMath does not share this characteristic. | |
492 | |
493 The GMP library, on the other hand, does not return error codes. Instead it uses a POSIX.1 \cite{POSIX1} signal system where errors | |
494 are signaled to the host application. This happens to be the fastest approach but definitely not the most versatile. In | |
495 effect a math error (i.e. invalid input, heap error, etc.) can cause a program to stop functioning which is definitely | |
496 undesirable in many situations. | |
497 | |
498 \subsection{Optimizations} | |
499 While LibTomMath is certainly not the fastest library (GMP often beats LibTomMath by a factor of two) it does | |
500 feature a set of optimal algorithms for tasks such as modular reduction, exponentiation, multiplication and squaring. GMP | |
501 and LIP also feature such optimizations while MPI only uses baseline algorithms with no optimizations. GMP lacks a few | |
502 of the additional modular reduction optimizations that LibTomMath features\footnote{At the time of this writing GMP | |
503 only had Barrett and Montgomery modular reduction algorithms.}. | |
504 | |
505 LibTomMath is almost always an order of magnitude faster than the MPI library at computationally expensive tasks such as modular | |
506 exponentiation. In the grand scheme of ``bignum'' libraries LibTomMath is faster than the average library and usually | |
507 slower than the best libraries such as GMP and OpenSSL by only a small factor. | |
508 | |
509 \subsection{Portability and Stability} | |
510 LibTomMath will build ``out of the box'' on any platform equipped with a modern version of the GNU C Compiler | |
511 (\textit{GCC}). This means that without changes the library will build without configuration or setting up any | |
512 variables. LIP and MPI will build ``out of the box'' as well but have numerous known bugs. Most notably the author of | |
513 MPI has recently stopped working on his library and LIP has long since been discontinued. | |
514 | |
515 GMP requires a configuration script to run and will not build out of the box. GMP and LibTomMath are still in active | |
516 development and are very stable across a variety of platforms. | |
517 | |
518 \subsection{Choice} | |
519 LibTomMath is a relatively compact, well documented, highly optimized and portable library which seems only natural for | |
520 the case study of this text. Various source files from the LibTomMath project will be included within the text. However, | |
521 the reader is encouraged to download their own copy of the library to actually be able to work with the library. | |
522 | |
523 \chapter{Getting Started} | |
524 \section{Library Basics} | |
525 The trick to writing any useful library of source code is to build a solid foundation and work outwards from it. First, | |
526 a problem along with allowable solution parameters should be identified and analyzed. In this particular case the | |
527 inability to accommodate multiple precision integers is the problem. Furthermore, the solution must be written | |
528 as portable source code that is reasonably efficient across several different computer platforms. | |
529 | |
530 After a foundation is formed the remainder of the library can be designed and implemented in a hierarchical fashion. | |
531 That is, to implement the lowest level dependencies first and work towards the most abstract functions last. For example, | |
532 before implementing a modular exponentiation algorithm one would implement a modular reduction algorithm. | |
533 By building outwards from a base foundation instead of using a parallel design methodology the resulting project is | |
534 highly modular. Being highly modular is a desirable property of any project as it often means the resulting product | |
535 has a small footprint and updates are easy to perform. | |
536 | |
142 | 537 Usually when I start a project I will begin with the header files. I define the data types I think I will need and |
19 | 538 prototype the initial functions that are not dependent on other functions (within the library). After I |
539 implement these base functions I prototype more dependent functions and implement them. The process repeats until | |
540 I implement all of the functions I require. For example, in the case of LibTomMath I implemented functions such as | |
541 mp\_init() well before I implemented mp\_mul() and even further before I implemented mp\_exptmod(). As an example of | |
542 why this design works note that the Karatsuba and Toom-Cook multipliers were written \textit{after} the | |
543 dependent function mp\_exptmod() was written. Adding the new multiplication algorithms did not require changes to the | |
544 mp\_exptmod() function itself and lowered the total cost of ownership (\textit{so to speak}) and the cost of developing | |
545 new algorithms. This methodology allows new algorithms to be tested in a complete framework with relative ease. | |
546 | |
547 FIGU,design_process,Design Flow of the First Few Original LibTomMath Functions. | |
548 | |
549 Only after the majority of the functions were in place did I pursue a less hierarchical approach to auditing and optimizing | |
550 the source code. For example, one day I may audit the multipliers and the next day the polynomial basis functions. | |
551 | |
552 It only makes sense, then, to begin the text with the preliminary data types and support algorithms that are required. | |
553 This chapter discusses the core algorithms of the library which are the dependents for every other algorithm. | |
554 | |
555 \section{What is a Multiple Precision Integer?} | |
556 Recall that most programming languages, in particular ISO C \cite{ISOC}, only have fixed precision data types that on their own cannot | |
557 be used to represent values larger than their precision will allow. The purpose of multiple precision algorithms is | |
558 to use fixed precision data types to create and manipulate multiple precision integers which may represent values | |
559 that are very large. | |
560 | |
561 As a well known analogy, school children are taught how to form numbers larger than nine by prepending more radix ten digits. In the decimal system | |
562 the largest single digit value is $9$. However, by concatenating digits together larger numbers may be represented. Newly prepended digits | |
563 (\textit{to the left}) are said to be in a different power of ten column. That is, the number $123$ can be described as having a $1$ in the hundreds | |
564 column, $2$ in the tens column and $3$ in the ones column. Or more formally $123 = 1 \cdot 10^2 + 2 \cdot 10^1 + 3 \cdot 10^0$. Computer based | |
565 multiple precision arithmetic is essentially the same concept. Larger integers are represented by adjoining fixed | |
566 precision computer words with the exception that a different radix is used. | |
567 | |
568 What most people probably do not think about explicitly are the various other attributes that describe a multiple precision | |
569 integer. For example, the integer $154_{10}$ has two immediately obvious properties. First, the integer is positive, | |
570 that is the sign of this particular integer is positive as opposed to negative. Second, the integer has three digits in | |
571 its representation. There is an additional property that the integer possesses that does not concern pencil-and-paper | |
572 arithmetic. The third property is how many digit placeholders are available to hold the integer. | |
573 | |
574 The human analogy of this third property is ensuring there is enough space on the paper to write the integer. For example, | |
575 if one starts writing a large number too far to the right on a piece of paper they will have to erase it and move left. | |
576 Similarly, computer algorithms must maintain strict control over memory usage to ensure that the digits of an integer | |
577 will not exceed the allowed boundaries. These three properties make up what is known as a multiple precision | |
578 integer or mp\_int for short. | |
579 | |
580 \subsection{The mp\_int Structure} | |
581 \label{sec:MPINT} | |
582 The mp\_int structure is the ISO C manifestation of a multiple precision integer. The ISO C standard does not provide for | |
583 any such data type but it does provide for making composite data types known as structures. The following is the structure definition | |
584 used within LibTomMath. | |
585 | |
586 \index{mp\_int} | |
142 | 587 \begin{figure}[here] |
588 \begin{center} | |
589 \begin{small} | |
590 %\begin{verbatim} | |
591 \begin{tabular}{|l|} | |
592 \hline | |
593 typedef struct \{ \\ | |
594 \hspace{3mm}int used, alloc, sign;\\ | |
595 \hspace{3mm}mp\_digit *dp;\\ | |
596 \} \textbf{mp\_int}; \\ | |
597 \hline | |
598 \end{tabular} | |
599 %\end{verbatim} | |
600 \end{small} | |
601 \caption{The mp\_int Structure} | |
602 \label{fig:mpint} | |
603 \end{center} | |
604 \end{figure} | |
605 | |
606 The mp\_int structure (fig. \ref{fig:mpint}) can be broken down as follows. | |
19 | 607 |
608 \begin{enumerate} | |
609 \item The \textbf{used} parameter denotes how many digits of the array \textbf{dp} contain the digits used to represent | |
610 a given integer. The \textbf{used} count must be positive (or zero) and may not exceed the \textbf{alloc} count. | |
611 | |
612 \item The \textbf{alloc} parameter denotes how | |
613 many digits are available in the array to use by functions before it has to increase in size. When the \textbf{used} count | |
614 of a result would exceed the \textbf{alloc} count all of the algorithms will automatically increase the size of the | |
615 array to accommodate the precision of the result. | |
616 | |
617 \item The pointer \textbf{dp} points to a dynamically allocated array of digits that represent the given multiple | |
618 precision integer. It is padded with $(\textbf{alloc} - \textbf{used})$ zero digits. The array is maintained in a least | |
619 significant digit order. As a pencil and paper analogy the array is organized such that the right most digits are stored | |
620 first starting at the location indexed by zero\footnote{In C all arrays begin at zero.} in the array. For example, | |
621 if \textbf{dp} contains $\lbrace a, b, c, \ldots \rbrace$ where \textbf{dp}$_0 = a$, \textbf{dp}$_1 = b$, \textbf{dp}$_2 = c$, $\ldots$ then | |
622 it would represent the integer $a + b\beta + c\beta^2 + \ldots$ | |
623 | |
624 \index{MP\_ZPOS} \index{MP\_NEG} | |
625 \item The \textbf{sign} parameter denotes the sign as either zero/positive (\textbf{MP\_ZPOS}) or negative (\textbf{MP\_NEG}). | |
626 \end{enumerate} | |
627 | |
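The interpretation of the \textbf{dp} array can be made concrete with a small, self-contained example. The tiny radix and helper below are illustrative only; they are not how LibTomMath stores or evaluates digits.

\begin{verbatim}
#include <stdio.h>

#define BETA 10   /* tiny illustrative radix */

/* evaluate a least significant digit first array exactly as the dp
 * member is interpreted: value = dp[0] + dp[1]*beta + dp[2]*beta^2 ... */
static long eval_digits(const int *dp, int used, int negative)
{
   long value = 0;
   int  i;

   for (i = used - 1; i >= 0; i--) {   /* Horner evaluation */
      value = value * BETA + dp[i];
   }
   return negative ? -value : value;
}

int main(void)
{
   int dp[3] = { 3, 2, 1 };            /* dp[0]=3, dp[1]=2, dp[2]=1 */

   printf("%ld\n", eval_digits(dp, 3, 0));   /* prints 123 */
   return 0;
}
\end{verbatim}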
628 \subsubsection{Valid mp\_int Structures} | |
629 Several rules are placed on the state of an mp\_int structure and are assumed to be followed for reasons of efficiency. | |
630 The only exceptions are when the structure is passed to initialization functions such as mp\_init() and mp\_init\_copy(). | |
631 | |
632 \begin{enumerate} | |
633 \item The value of \textbf{alloc} may not be less than one. That is \textbf{dp} always points to a previously allocated | |
634 array of digits. | |
635 \item The value of \textbf{used} may not exceed \textbf{alloc} and must be greater than or equal to zero. | |
636 \item The value of \textbf{used} implies the digit at index $(used - 1)$ of the \textbf{dp} array is non-zero. That is, | |
637 leading zero digits in the most significant positions must be trimmed. | |
638 \begin{enumerate} | |
639 \item Digits in the \textbf{dp} array at and above the \textbf{used} location must be zero. | |
640 \end{enumerate} | |
641 \item The value of \textbf{sign} must be \textbf{MP\_ZPOS} if \textbf{used} is zero; | |
642 this represents the mp\_int value of zero. | |
643 \end{enumerate} | |
644 | |
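For illustration, the rules above can be written down as a small validity check. The structure definition and constants are repeated locally so the fragment stands on its own; LibTomMath itself simply assumes these invariants rather than verifying them at runtime.

\begin{verbatim}
#include <stddef.h>

typedef unsigned int mp_digit;   /* simplified digit type for illustration */
#define MP_ZPOS 0
#define MP_NEG  1

typedef struct {
   int used, alloc, sign;
   mp_digit *dp;
} mp_int;

/* return 1 if the structure obeys the rules listed above, 0 otherwise */
int mp_int_is_valid(const mp_int *a)
{
   int i;

   if (a->alloc < 1 || a->dp == NULL)           return 0;  /* rule 1   */
   if (a->used < 0  || a->used > a->alloc)      return 0;  /* rule 2   */
   if (a->used > 0  && a->dp[a->used - 1] == 0) return 0;  /* rule 3   */
   for (i = a->used; i < a->alloc; i++) {                  /* rule 3.1 */
      if (a->dp[i] != 0)                        return 0;
   }
   if (a->used == 0 && a->sign != MP_ZPOS)      return 0;  /* rule 4   */
   return 1;
}
\end{verbatim}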
645 \section{Argument Passing} | |
646 A convention of argument passing must be adopted early on in the development of any library. Making the function | |
647 prototypes consistent will help eliminate many headaches in the future as the library grows to significant complexity. | |
648 In LibTomMath the multiple precision integer functions accept parameters from left to right as pointers to mp\_int | |
649 structures. That means that the source (input) operands are placed on the left and the destination (output) on the right. | |
650 Consider the following examples. | |
651 | |
652 \begin{verbatim} | |
653 mp_mul(&a, &b, &c); /* c = a * b */ | |
654 mp_add(&a, &b, &a); /* a = a + b */ | |
655 mp_sqr(&a, &b); /* b = a * a */ | |
656 \end{verbatim} | |
657 | |
658 The left to right order is a fairly natural way to implement the functions since it lets the developer read aloud the | |
659 functions and make sense of them. For example, the first function would read ``multiply a and b and store in c''. | |
660 | |
661 Certain libraries (\textit{LIP by Lenstra for instance}) accept parameters the other way around, to mimic the order | |
662 of assignment expressions. That is, the destination (output) is on the left and arguments (inputs) are on the right. In | |
663 truth, it is entirely a matter of preference. In the case of LibTomMath the convention from the MPI library has been | |
664 adopted. | |
665 | |
666 Another very useful design consideration, provided for in LibTomMath, is whether to allow argument sources to also be a | |
667 destination. For example, the second example (\textit{mp\_add}) adds $a$ to $b$ and stores in $a$. This is an important | |
668 feature to implement since it allows the calling function to cut down on the number of variables it must maintain. | |
669 However, to implement this feature specific care has to be given to ensure the destination is not modified before the | |
670 source is fully read. | |
671 | |
672 \section{Return Values} | |
673 A well implemented application, no matter what its purpose, should trap as many runtime errors as possible and return them | |
674 to the caller. By catching runtime errors a library can be guaranteed to prevent undefined behaviour. However, the end | |
675 developer can still manage to cause a library to crash. For example, by passing an invalid pointer an application may | |
676 fault by dereferencing memory not owned by the application. | |
677 | |
678 In the case of LibTomMath the only errors that are checked for are related to inappropriate inputs (division by zero for | |
679 instance) and memory allocation errors. It will not check that the mp\_int passed to any function is valid nor | |
680 will it check pointers for validity. Any function that can cause a runtime error will return an error code as an | |
142 | 681 \textbf{int} data type with one of the following values (fig \ref{fig:errcodes}). |
19 | 682 |
683 \index{MP\_OKAY} \index{MP\_VAL} \index{MP\_MEM} | |
142 | 684 \begin{figure}[here] |
19 | 685 \begin{center} |
686 \begin{tabular}{|l|l|} | |
687 \hline \textbf{Value} & \textbf{Meaning} \\ | |
688 \hline \textbf{MP\_OKAY} & The function was successful \\ | |
689 \hline \textbf{MP\_VAL} & One of the input value(s) was invalid \\ | |
690 \hline \textbf{MP\_MEM} & The function ran out of heap memory \\ | |
691 \hline | |
692 \end{tabular} | |
693 \end{center} | |
142 | 694 \caption{LibTomMath Error Codes} |
695 \label{fig:errcodes} | |
696 \end{figure} | |
19 | 697 |
698 When an error is detected within a function it should free any memory it allocated, often during the initialization of | |
699 temporary mp\_ints, and return as soon as possible. The goal is to leave the system in the same state it was when the | |
700 function was called. Error checking with this style of API is fairly simple. | |
701 | |
702 \begin{verbatim} | |
703 int err; | |
704 if ((err = mp_add(&a, &b, &c)) != MP_OKAY) { | |
705 printf("Error: %s\n", mp_error_to_string(err)); | |
706 exit(EXIT_FAILURE); | |
707 } | |
708 \end{verbatim} | |
709 | |
710 The GMP \cite{GMP} library uses C style \textit{signals} to flag errors which is of questionable use. Not all errors are fatal | |
711 and it was not deemed ideal by the author of LibTomMath to force developers to have signal handlers for such cases. | |
712 | |
713 \section{Initialization and Clearing} | |
714 The logical starting point when actually writing multiple precision integer functions is the initialization and | |
715 clearing of the mp\_int structures. These two algorithms will be used by the majority of the higher level algorithms. | |
716 | |
717 Given the basic mp\_int structure an initialization routine must first allocate memory to hold the digits of | |
718 the integer. Often it is optimal to allocate a sufficiently large pre-set number of digits even though | |
719 the initial integer will represent zero. If only a single digit were allocated quite a few subsequent re-allocations | |
720 would occur when operations are performed on the integers. There is a tradeoff between how many default digits to allocate | |
721 and how many re-allocations are tolerable. Obviously allocating an excessive amount of digits initially will waste | |
722 memory and become unmanageable. | |
723 | |
724 If the memory for the digits has been successfully allocated then the rest of the members of the structure must | |
725 be initialized. Since the initial state of an mp\_int is to represent the zero integer, the allocated digits must be set | |
726 to zero, the \textbf{used} count set to zero and \textbf{sign} set to \textbf{MP\_ZPOS}. | |
727 | |
728 \subsection{Initializing an mp\_int} | |
729 An mp\_int is said to be initialized if it is set to a valid, preferably default, state such that all of the members of the | |
730 structure are set to valid values. The mp\_init algorithm will perform such an action. | |
731 | |
142 | 732 \index{mp\_init} |
19 | 733 \begin{figure}[here] |
734 \begin{center} | |
735 \begin{tabular}{l} | |
736 \hline Algorithm \textbf{mp\_init}. \\ | |
737 \textbf{Input}. An mp\_int $a$ \\ | |
738 \textbf{Output}. Allocate memory and initialize $a$ to a known valid mp\_int state. \\ | |
739 \hline \\ | |
740 1. Allocate memory for \textbf{MP\_PREC} digits. \\ | |
741 2. If the allocation failed return(\textit{MP\_MEM}) \\ | |
742 3. for $n$ from $0$ to $MP\_PREC - 1$ do \\ | |
743 \hspace{3mm}3.1 $a_n \leftarrow 0$\\ | |
744 4. $a.sign \leftarrow MP\_ZPOS$\\ | |
745 5. $a.used \leftarrow 0$\\ | |
746 6. $a.alloc \leftarrow MP\_PREC$\\ | |
747 7. Return(\textit{MP\_OKAY})\\ | |
748 \hline | |
749 \end{tabular} | |
750 \end{center} | |
751 \caption{Algorithm mp\_init} | |
752 \end{figure} | |
753 | |
754 \textbf{Algorithm mp\_init.} | |
142 | 755 The purpose of this function is to initialize an mp\_int structure so that the rest of the library can properly |
756 manipulate it. It is assumed that the input may not have had any of its members previously initialized which is certainly
757 a valid assumption if the input resides on the stack. | |
758 | |
759 Before any of the members such as \textbf{sign}, \textbf{used} or \textbf{alloc} are initialized the memory for | |
760 the digits is allocated. If this fails the function returns before setting any of the other members. The \textbf{MP\_PREC} | |
761 name represents a constant\footnote{Defined in the ``tommath.h'' header file within LibTomMath.} | |
762 used to dictate the minimum precision of newly initialized mp\_int integers. Ideally, it is at least equal to the smallest | |
763 precision number you'll be working with. | |
764 | |
765 Allocating a block of digits at first instead of a single digit has the benefit of lowering the number of usually slow
766 heap operations that later functions will have to perform. If \textbf{MP\_PREC} is set correctly the slack
767 memory and the number of heap operations will be trivial. | |
768 | |
769 Once the allocation has been made the digits have to be set to zero as well as the \textbf{used}, \textbf{sign} and | |
770 \textbf{alloc} members initialized. This ensures that the mp\_int will always represent the default state of zero regardless | |
771 of the original condition of the input. | |
19 | 772 |
773 \textbf{Remark.} | |
774 This function introduces the idiosyncrasy that all iterative loops, commonly initiated with the ``for'' keyword, iterate incrementally | |
775 when the ``to'' keyword is placed between two expressions. For example, ``for $a$ from $b$ to $c$ do'' means that | |
776 a subsequent expression (or body of expressions) is to be evaluated up to $c - b + 1$ times so long as $b \le c$. In each
777 iteration the variable $a$ is substituted for a new integer that lies inclusively between $b$ and $c$. If $b > c$ occurred
778 the loop would not iterate. By contrast if the ``downto'' keyword were used in place of ``to'' the loop would iterate | |
779 decrementally. | |
780 | |
781 EXAM,bn_mp_init.c | |
782 | |
783 One immediate observation of this initialization function is that it does not return a pointer to an mp\_int structure. It
784 is assumed that the caller has already allocated memory for the mp\_int structure, typically on the application stack. The | |
785 call to mp\_init() is used only to initialize the members of the structure to a known default state. | |
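
For illustration, a minimal calling sequence might resemble the following fragment (the variable names are arbitrary and the fragment is a sketch, not part of the library listings).

\begin{verbatim}
mp_int num;
int    err;

/* num lives on the stack; mp_init must place it in a valid
   (zero) state before any other function may use it        */
if ((err = mp_init(&num)) != MP_OKAY) {
   return err;   /* typically MP_MEM if the heap is exhausted */
}
\end{verbatim}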
786 | |
142 | 787 Here we see (line @23,XMALLOC@) the memory allocation is performed first. This allows us to exit cleanly and quickly |
788 if there is an error. If the allocation fails the routine will return \textbf{MP\_MEM} to the caller to indicate there | |
789 was a memory error. The function XMALLOC is what actually allocates the memory. Technically XMALLOC is not a function | |
790 but a macro defined in ``tommath.h''. By default, XMALLOC will evaluate to malloc() which is the C library's built--in
791 memory allocation routine. | |
792 | |
793 In order to assure the mp\_int is in a known state the digits must be set to zero. On most platforms this could have been | |
794 accomplished by using calloc() instead of malloc(). However, to correctly initialize an integer type to a given value in a
795 portable fashion you have to actually assign the value. The for loop (line @28,for@) performs this required | |
796 operation. | |
797 | |
798 After the memory has been successfully initialized the remainder of the members are initialized | |
19 | 799 (lines @29,used@ through @31,sign@) to their respective default states. At this point the algorithm has succeeded and |
142 | 800 a success code is returned to the calling function. If this function returns \textbf{MP\_OKAY} it is safe to assume the |
801 mp\_int structure has been properly initialized and is safe to use with other functions within the library. | |
19 | 802 |
803 \subsection{Clearing an mp\_int} | |
804 When an mp\_int is no longer required by the application, the memory that has been allocated for its digits must be | |
805 returned to the application's memory pool with the mp\_clear algorithm. | |
806 | |
807 \begin{figure}[here] | |
808 \begin{center} | |
809 \begin{tabular}{l} | |
810 \hline Algorithm \textbf{mp\_clear}. \\ | |
811 \textbf{Input}. An mp\_int $a$ \\ | |
142 | 812 \textbf{Output}. The memory for $a$ shall be deallocated. \\ |
19 | 813 \hline \\ |
814 1. If $a$ has been previously freed then return(\textit{MP\_OKAY}). \\ | |
815 2. for $n$ from 0 to $a.used - 1$ do \\ | |
816 \hspace{3mm}2.1 $a_n \leftarrow 0$ \\ | |
817 3. Free the memory allocated for the digits of $a$. \\ | |
818 4. $a.used \leftarrow 0$ \\ | |
819 5. $a.alloc \leftarrow 0$ \\ | |
820 6. $a.sign \leftarrow MP\_ZPOS$ \\ | |
821 7. Return(\textit{MP\_OKAY}). \\ | |
822 \hline | |
823 \end{tabular} | |
824 \end{center} | |
825 \caption{Algorithm mp\_clear} | |
826 \end{figure} | |
827 | |
828 \textbf{Algorithm mp\_clear.} | |
142 | 829 This algorithm accomplishes two goals. First, it clears the digits and the other mp\_int members. This ensures that |
830 if a developer accidentally re-uses a cleared structure it is less likely to cause problems. The second goal | |
831 is to free the allocated memory. | |
832 | |
833 The logic behind the algorithm is extended by marking cleared mp\_int structures so that subsequent calls to this | |
834 algorithm will not try to free the memory multiple times. Cleared mp\_ints are detectable by their pre-defined invalid
835 digit pointer setting (\textbf{dp} is set to \textbf{NULL}).
836 | |
837 Once an mp\_int has been cleared the mp\_int structure is no longer in a valid state for any other algorithm | |
19 | 838 with the exception of algorithms mp\_init, mp\_init\_copy, mp\_init\_size and mp\_clear. |
839 | |
840 EXAM,bn_mp_clear.c | |
841 | |
142 | 842 The algorithm only operates on the mp\_int if it hasn't been previously cleared. The if statement (line @23,a->dp != NULL@) |
843 checks to see if the \textbf{dp} member is not \textbf{NULL}. If the mp\_int is a valid mp\_int then \textbf{dp} cannot be | |
844 \textbf{NULL} in which case the if statement will evaluate to true. | |
845 | |
846 The digits of the mp\_int are cleared by the for loop (line @25,for@) which assigns a zero to every digit. Similar to mp\_init() | |
847 the digits are assigned zero instead of using block memory operations (such as memset()) since this is more portable. | |
848 | |
849 The digits are deallocated off the heap via the XFREE macro. Similar to XMALLOC the XFREE macro actually evaluates to | |
850 a standard C library function. In this case the free() function. Since free() only deallocates the memory the pointer | |
851 still has to be reset to \textbf{NULL} manually (line @33,NULL@). | |
852 | |
853 Now that the digits have been cleared and deallocated the other members are set to their final values (lines @34,= 0@ and @35,ZPOS@). | |
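
In practice initialization and clearing bracket the useful lifetime of a variable. The following fragment is a small usage sketch (not taken from the library sources).

\begin{verbatim}
mp_int tmp;

if (mp_init(&tmp) != MP_OKAY) {
   return MP_MEM;
}

/* ... use tmp with other LibTomMath functions ... */

/* zero and release the digits; tmp must be re-initialized
   (e.g. with mp_init) before it may be used again          */
mp_clear(&tmp);
\end{verbatim}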
19 | 854 |
855 \section{Maintenance Algorithms} | |
856 | |
857 The previous sections describe how to initialize and clear an mp\_int structure. To further support operations
858 that are to be performed on mp\_int structures (such as addition and multiplication) the dependent algorithms must be | |
859 able to augment the precision of an mp\_int and | |
860 initialize mp\_ints with differing initial conditions. | |
861 | |
862 These algorithms complete the set of low level algorithms required to work with mp\_int structures in the higher level | |
863 algorithms such as addition, multiplication and modular exponentiation. | |
864 | |
865 \subsection{Augmenting an mp\_int's Precision} | |
866 When storing a value in an mp\_int structure, a sufficient number of digits must be available to accommodate the entire
867 result of an operation without loss of precision. Quite often the size of the array given by the \textbf{alloc} member | |
868 is large enough to simply increase the \textbf{used} digit count. However, when the size of the array is too small it | |
869 must be re-sized appropriately to accommodate the result. The mp\_grow algorithm will provide this functionality.
870 | |
871 \newpage\begin{figure}[here] | |
872 \begin{center} | |
873 \begin{tabular}{l} | |
874 \hline Algorithm \textbf{mp\_grow}. \\ | |
875 \textbf{Input}. An mp\_int $a$ and an integer $b$. \\ | |
876 \textbf{Output}. $a$ is expanded to accommodate $b$ digits. \\
877 \hline \\ | |
878 1. if $a.alloc \ge b$ then return(\textit{MP\_OKAY}) \\ | |
879 2. $u \leftarrow b\mbox{ (mod }MP\_PREC\mbox{)}$ \\ | |
880 3. $v \leftarrow b + 2 \cdot MP\_PREC - u$ \\ | |
142 | 881 4. Re-allocate the array of digits $a$ to size $v$ \\ |
19 | 882 5. If the allocation failed then return(\textit{MP\_MEM}). \\ |
883 6. for $n$ from $a.alloc$ to $v - 1$ do \\
884 \hspace{+3mm}6.1 $a_n \leftarrow 0$ \\ | |
885 7. $a.alloc \leftarrow v$ \\ | |
886 8. Return(\textit{MP\_OKAY}) \\ | |
887 \hline | |
888 \end{tabular} | |
889 \end{center} | |
890 \caption{Algorithm mp\_grow} | |
891 \end{figure} | |
892 | |
893 \textbf{Algorithm mp\_grow.} | |
894 It is ideal to prevent re-allocations from being performed if they are not required (step one). This is useful to | |
895 prevent mp\_ints from growing excessively in code that erroneously calls mp\_grow. | |
896 | |
897 The requested digit count is padded up to the next multiple of \textbf{MP\_PREC} plus an additional \textbf{MP\_PREC} (steps two and three).
898 This helps prevent many trivial reallocations that would grow an mp\_int by trivially small values. | |
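
To illustrate, the padding of steps two and three amounts to the following computation (a sketch only; \textbf{MP\_PREC} is taken to be 32 purely for the worked example).

\begin{verbatim}
/* pad b up to the next multiple of MP_PREC plus one extra block */
int v = b + (2 * MP_PREC) - (b % MP_PREC);

/* e.g. with MP_PREC = 32 and b = 37:
   v = 37 + 64 - (37 % 32) = 37 + 64 - 5 = 96 digits             */
\end{verbatim}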
899 | |
900 It is assumed that the reallocation (step four) leaves the lower $a.alloc$ digits of the mp\_int intact. This is much | |
901 akin to how the \textit{realloc} function from the standard C library works. Since the newly allocated digits are | |
902 assumed to contain undefined values they are initially set to zero. | |
903 | |
904 EXAM,bn_mp_grow.c | |
905 | |
906 A quick optimization is to first determine if a memory re-allocation is required at all. The if statement (line @24,alloc@) checks |
142 | 907 if the \textbf{alloc} member of the mp\_int is smaller than the requested digit count. If the count is not larger than \textbf{alloc} |
908 the function skips the re-allocation part thus saving time. | |
909 | |
910 When a re-allocation is performed it is turned into an optimal request to save time in the future. The requested digit count is
911 padded upwards to the second multiple of \textbf{MP\_PREC} larger than the requested count (line @25, size@). The XREALLOC function is used
912 to re-allocate the memory. As per the other functions XREALLOC is actually a macro which evaluates to realloc by default. The realloc | |
913 function leaves the base of the allocation intact which means the first \textbf{alloc} digits of the mp\_int are the same as before | |
914 the re-allocation. All that is left is to clear the newly allocated digits and return. | |
915 | |
916 Note that the re-allocation result is actually stored in a temporary pointer $tmp$. This is to allow this function to return | |
917 an error with a valid pointer. Earlier releases of the library stored the result of XREALLOC into the mp\_int $a$. That would | |
918 result in a memory leak if XREALLOC ever failed. | |
19 | 919 |
920 \subsection{Initializing Variable Precision mp\_ints} | |
921 Occasionally the number of digits required will be known in advance of an initialization, based on, for example, the size | |
922 of input mp\_ints to a given algorithm. The purpose of algorithm mp\_init\_size is similar to mp\_init except that it | |
923 will allocate \textit{at least} a specified number of digits. | |
924 | |
925 \begin{figure}[here] | |
926 \begin{small} | |
927 \begin{center} | |
928 \begin{tabular}{l} | |
929 \hline Algorithm \textbf{mp\_init\_size}. \\ | |
930 \textbf{Input}. An mp\_int $a$ and the requested number of digits $b$. \\ | |
931 \textbf{Output}. $a$ is initialized to hold at least $b$ digits. \\ | |
932 \hline \\ | |
933 1. $u \leftarrow b \mbox{ (mod }MP\_PREC\mbox{)}$ \\ | |
934 2. $v \leftarrow b + 2 \cdot MP\_PREC - u$ \\ | |
935 3. Allocate $v$ digits. \\ | |
936 4. for $n$ from $0$ to $v - 1$ do \\ | |
937 \hspace{3mm}4.1 $a_n \leftarrow 0$ \\ | |
938 5. $a.sign \leftarrow MP\_ZPOS$\\ | |
939 6. $a.used \leftarrow 0$\\ | |
940 7. $a.alloc \leftarrow v$\\ | |
941 8. Return(\textit{MP\_OKAY})\\ | |
942 \hline | |
943 \end{tabular} | |
944 \end{center} | |
945 \end{small} | |
946 \caption{Algorithm mp\_init\_size} | |
947 \end{figure} | |
948 | |
949 \textbf{Algorithm mp\_init\_size.} | |
950 This algorithm will initialize an mp\_int structure $a$ like algorithm mp\_init with the exception that the number of | |
951 digits allocated can be controlled by the second input argument $b$. The input size is padded upwards so it is a | |
952 multiple of \textbf{MP\_PREC} plus an additional \textbf{MP\_PREC} digits. This padding is used to prevent trivial | |
953 allocations from becoming a bottleneck in the rest of the algorithms. | |
954 | |
955 Like algorithm mp\_init, the mp\_int structure is initialized to a default state representing the integer zero. This | |
956 particular algorithm is useful if the approximate size of the input is known ahead of time. If the approximation is
957 correct no further memory re-allocations are required to work with the mp\_int. | |
958 | |
959 EXAM,bn_mp_init_size.c | |
960 | |
961 The number of digits $b$ requested is padded (line @22,MP_PREC@) by first augmenting it to the next multiple of | |
962 \textbf{MP\_PREC} and then adding \textbf{MP\_PREC} to the result. If the memory can be successfully allocated the | |
963 mp\_int is placed in a default state representing the integer zero. Otherwise, the error code \textbf{MP\_MEM} will be | |
964 returned (line @27,return@). | |
965 | |
142 | 966 The digits are allocated and set to zero at the same time with the calloc() function (line @25,XCALLOC@). The |
19 | 967 \textbf{used} count is set to zero, the \textbf{alloc} count set to the padded digit count and the \textbf{sign} flag set |
968 to \textbf{MP\_ZPOS} to achieve a default valid mp\_int state (lines @29,used@, @30,alloc@ and @31,sign@). If the function | |
969 returns successfully then it is correct to assume that the mp\_int structure is in a valid state for the remainder of the
970 functions to work with. | |
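
For example, a routine that knows its output will require roughly twice as many digits as its input could request them up front. The fragment below is a usage sketch; $a$ is assumed to be an already initialized mp\_int.

\begin{verbatim}
mp_int t;

/* ask for enough digits to hold roughly the square of a */
if (mp_init_size(&t, 2 * a.used + 1) != MP_OKAY) {
   return MP_MEM;
}
\end{verbatim}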
971 | |
972 \subsection{Multiple Integer Initializations and Clearings} | |
973 Occasionally a function will require a series of mp\_int data types to be made available simultaneously. | |
974 The purpose of algorithm mp\_init\_multi is to initialize a variable length array of mp\_int structures in a single | |
975 statement. It is essentially a shortcut to multiple initializations. | |
976 | |
977 \newpage\begin{figure}[here] | |
978 \begin{center} | |
979 \begin{tabular}{l} | |
980 \hline Algorithm \textbf{mp\_init\_multi}. \\ | |
981 \textbf{Input}. Variable length array $V_k$ of mp\_int variables of length $k$. \\ | |
982 \textbf{Output}. The array is initialized such that each mp\_int of $V_k$ is ready to use. \\ | |
983 \hline \\ | |
984 1. for $n$ from 0 to $k - 1$ do \\ | |
985 \hspace{+3mm}1.1. Initialize the mp\_int $V_n$ (\textit{mp\_init}) \\ | |
986 \hspace{+3mm}1.2. If initialization failed then do \\ | |
987 \hspace{+6mm}1.2.1. for $j$ from $0$ to $n$ do \\ | |
988 \hspace{+9mm}1.2.1.1. Free the mp\_int $V_j$ (\textit{mp\_clear}) \\ | |
989 \hspace{+6mm}1.2.2. Return(\textit{MP\_MEM}) \\ | |
990 2. Return(\textit{MP\_OKAY}) \\ | |
991 \hline | |
992 \end{tabular} | |
993 \end{center} | |
994 \caption{Algorithm mp\_init\_multi} | |
995 \end{figure} | |
996 | |
997 \textbf{Algorithm mp\_init\_multi.} | |
998 The algorithm will initialize the array of mp\_int variables one at a time. If a runtime error has been detected | |
999 (\textit{step 1.2}) all of the previously initialized variables are cleared. The goal is an ``all or nothing'' | |
1000 initialization which allows for quick recovery from runtime errors. | |
1001 | |
1002 EXAM,bn_mp_init_multi.c | |
1003 | |
1004 This function initializes a variable length list of mp\_int structure pointers. However, instead of having the mp\_int
1005 structures in an actual C array they are simply passed as arguments to the function. This function makes use of the | |
1006 ``...'' argument syntax of the C programming language. The list is terminated with a final \textbf{NULL} argument | |
1007 appended on the right. | |
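
For example, a function requiring three temporaries could initialize them in one statement; the trailing \textbf{NULL} terminates the list and the companion function mp\_clear\_multi() releases them in the same fashion. (This fragment is a usage sketch, not part of the library listings.)

\begin{verbatim}
mp_int q, r, t;

if (mp_init_multi(&q, &r, &t, NULL) != MP_OKAY) {
   return MP_MEM;
}

/* ... on success all three are valid; on failure none are ... */

mp_clear_multi(&q, &r, &t, NULL);
\end{verbatim}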
1008 | |
1009 The function uses the ``stdarg.h'' \textit{va} functions to step portably through the arguments to the function. A count | |
1010 $n$ of successfully initialized mp\_int structures is maintained (line @47,n++@) such that if a failure does occur,
1011 the algorithm can backtrack and free the previously initialized structures (lines @27,if@ to @46,}@). | |
1012 | |
1013 | |
1014 \subsection{Clamping Excess Digits} | |
1015 When a function anticipates a result will be $n$ digits it is simpler to assume this is true within the body of | |
1016 the function instead of checking during the computation. For example, a multiplication of an $i$ digit number by a
1017 $j$ digit number produces a result of at most $i + j$ digits. It is entirely possible that the result is $i + j - 1$
1018 though, with no final carry into the last position. However, suppose the destination had to be first expanded
1019 (\textit{via mp\_grow}) to accommodate $i + j - 1$ digits then further expanded to accommodate the final carry.
1020 That would be a considerable waste of time since heap operations are relatively slow. | |
1021 | |
1022 The ideal solution is to always assume the result is $i + j$ and fix up the \textbf{used} count after the function | |
1023 terminates. This way a single heap operation (\textit{at most}) is required. However, if the result was not checked | |
1024 there would be an excess high order zero digit. | |
1025 | |
1026 For example, suppose the product of two integers was $x_n = (0x_{n-1}x_{n-2}...x_0)_{\beta}$. The leading zero digit | |
1027 will not contribute to the precision of the result. In fact, through subsequent operations more leading zero digits would | |
1028 accumulate to the point the size of the integer would be prohibitive. As a result even though the precision is very | |
1029 low the representation is excessively large. | |
1030 | |
1031 The mp\_clamp algorithm is designed to solve this very problem. It will trim high-order zeros by decrementing the | |
1032 \textbf{used} count until a non-zero most significant digit is found. Also in this system, zero is considered to be a | |
1033 positive number which means that if the \textbf{used} count is decremented to zero, the sign must be set to | |
1034 \textbf{MP\_ZPOS}. | |
1035 | |
1036 \begin{figure}[here] | |
1037 \begin{center} | |
1038 \begin{tabular}{l} | |
1039 \hline Algorithm \textbf{mp\_clamp}. \\ | |
1040 \textbf{Input}. An mp\_int $a$ \\ | |
1041 \textbf{Output}. Any excess leading zero digits of $a$ are removed \\ | |
1042 \hline \\ | |
1043 1. while $a.used > 0$ and $a_{a.used - 1} = 0$ do \\ | |
1044 \hspace{+3mm}1.1 $a.used \leftarrow a.used - 1$ \\ | |
1045 2. if $a.used = 0$ then do \\ | |
1046 \hspace{+3mm}2.1 $a.sign \leftarrow MP\_ZPOS$ \\ | |
1047 \hline \\ | |
1048 \end{tabular} | |
1049 \end{center} | |
1050 \caption{Algorithm mp\_clamp} | |
1051 \end{figure} | |
1052 | |
1053 \textbf{Algorithm mp\_clamp.} | |
1054 As can be expected this algorithm is very simple. The loop on step one is expected to iterate only once or twice at | |
1055 the most. For example, this will happen in cases where there is not a carry to fill the last position. Step two fixes the sign for | |
1056 when all of the digits are zero to ensure that the mp\_int is valid at all times. | |
1057 | |
1058 EXAM,bn_mp_clamp.c | |
1059 | |
1060 Note on line @27,while@ how the test of the \textbf{used} count is made on the left of the \&\& operator. In the C programming
1061 language the operands of \&\& are evaluated left to right with a boolean short-circuit if any condition fails. This is
1062 important since if the \textbf{used} is zero the test on the right would fetch below the array. That is obviously | |
1063 undesirable. The parenthesis on line @28,a->used@ is used to make sure the \textbf{used} count is decremented and not | |
1064 the pointer ``a''. | |
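
As a usage sketch (the surrounding multiplier is hypothetical), a function that assumes the worst case size simply fixes the digit count afterwards.

\begin{verbatim}
/* assume the worst case size, produce the digits, then trim */
c->used = a->used + b->used;
/* ... compute the digits of the product into c->dp ... */
mp_clamp(c);
\end{verbatim}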
1065 | |
1066 \section*{Exercises} | |
1067 \begin{tabular}{cl} | |
1068 $\left [ 1 \right ]$ & Discuss the relevance of the \textbf{used} member of the mp\_int structure. \\ | |
1069 & \\ | |
1070 $\left [ 1 \right ]$ & Discuss the consequences of not using padding when performing allocations. \\ | |
1071 & \\ | |
1072 $\left [ 2 \right ]$ & Estimate an ideal value for \textbf{MP\_PREC} when performing 1024-bit RSA \\ | |
1073 & encryption when $\beta = 2^{28}$. \\ | |
1074 & \\ | |
1075 $\left [ 1 \right ]$ & Discuss the relevance of the algorithm mp\_clamp. What does it prevent? \\ | |
1076 & \\ | |
1077 $\left [ 1 \right ]$ & Give an example of when the algorithm mp\_init\_copy might be useful. \\ | |
1078 & \\ | |
1079 \end{tabular} | |
1080 | |
1081 | |
1082 %%% | |
1083 % CHAPTER FOUR | |
1084 %%% | |
1085 | |
1086 \chapter{Basic Operations} | |
1087 | |
1088 \section{Introduction} | |
1089 In the previous chapter a series of low level algorithms were established that dealt with initializing and maintaining | |
1090 mp\_int structures. This chapter will discuss another set of seemingly non-algebraic algorithms which will form the low | |
1091 level basis of the entire library. While these algorithm are relatively trivial it is important to understand how they | |
1092 work before proceeding since these algorithms will be used almost intrinsically in the following chapters. | |
1093 | |
1094 The algorithms in this chapter deal primarily with more ``programmer'' related tasks such as creating copies of | |
1095 mp\_int structures, assigning small values to mp\_int structures and comparisons of the values mp\_int structures | |
1096 represent. | |
1097 | |
1098 \section{Assigning Values to mp\_int Structures} | |
1099 \subsection{Copying an mp\_int} | |
1100 Assigning the value that a given mp\_int structure represents to another mp\_int structure shall be known as making | |
1101 a copy for the purposes of this text. The copy of the mp\_int will be a separate entity that represents the same | |
1102 value as the mp\_int it was copied from. The mp\_copy algorithm provides this functionality. | |
1103 | |
1104 \newpage\begin{figure}[here] | |
1105 \begin{center} | |
1106 \begin{tabular}{l} | |
1107 \hline Algorithm \textbf{mp\_copy}. \\ | |
1108 \textbf{Input}. An mp\_int $a$ and $b$. \\ | |
1109 \textbf{Output}. Store a copy of $a$ in $b$. \\ | |
1110 \hline \\ | |
1111 1. If $b.alloc < a.used$ then grow $b$ to $a.used$ digits. (\textit{mp\_grow}) \\ | |
1112 2. for $n$ from 0 to $a.used - 1$ do \\ | |
1113 \hspace{3mm}2.1 $b_{n} \leftarrow a_{n}$ \\ | |
1114 3. for $n$ from $a.used$ to $b.used - 1$ do \\ | |
1115 \hspace{3mm}3.1 $b_{n} \leftarrow 0$ \\ | |
1116 4. $b.used \leftarrow a.used$ \\ | |
1117 5. $b.sign \leftarrow a.sign$ \\ | |
1118 6. return(\textit{MP\_OKAY}) \\ | |
1119 \hline | |
1120 \end{tabular} | |
1121 \end{center} | |
1122 \caption{Algorithm mp\_copy} | |
1123 \end{figure} | |
1124 | |
1125 \textbf{Algorithm mp\_copy.} | |
1126 This algorithm copies the mp\_int $a$ such that upon successful termination of the algorithm the mp\_int $b$ will
1127 represent the same integer as the mp\_int $a$. The mp\_int $b$ shall be a complete and distinct copy of the
1128 mp\_int $a$ meaning that the mp\_int $a$ can be modified and it shall not affect the value of the mp\_int $b$.
1129 | |
1130 If $b$ does not have enough room for the digits of $a$ it must first have its precision augmented via the mp\_grow | |
1131 algorithm. The digits of $a$ are copied over the digits of $b$ and any excess digits of $b$ are set to zero (step two | |
1132 and three). The \textbf{used} and \textbf{sign} members of $a$ are finally copied over the respective members of | |
1133 $b$. | |
1134 | |
1135 \textbf{Remark.} This algorithm also introduces a new idiosyncrasy that will be used throughout the rest of the | |
1136 text. The error return codes of other algorithms are not explicitly checked in the pseudo-code presented. For example, in | |
1137 step one of the mp\_copy algorithm the return of mp\_grow is not explicitly checked to ensure it succeeded. Text space is | |
1138 limited so it is assumed that if an algorithm fails it will clear all temporarily allocated mp\_ints and return
1139 the error code itself. However, the C code presented will demonstrate all of the error handling logic required to | |
1140 implement the pseudo-code. | |
1141 | |
1142 EXAM,bn_mp_copy.c | |
1143 | |
1144 Occasionally a dependent algorithm may copy an mp\_int effectively into itself such as when the input and output | |
1145 mp\_int structures passed to a function are one and the same. For this case it is optimal to return immediately without | |
1146 copying digits (line @24,a == b@). | |
1147 | |
1148 The mp\_int $b$ must have enough digits to accomodate the used digits of the mp\_int $a$. If $b.alloc$ is less than | |
1149 $a.used$ the algorithm mp\_grow is used to augment the precision of $b$ (lines @29,alloc@ to @33,}@). In order to | |
1150 simplify the inner loop that copies the digits from $a$ to $b$, two aliases $tmpa$ and $tmpb$ point directly at the digits | |
1151 of the mp\_ints $a$ and $b$ respectively. These aliases (lines @42,tmpa@ and @45,tmpb@) allow the compiler to access the digits without first dereferencing the | |
1152 mp\_int pointers and then subsequently the pointer to the digits. | |
1153 | |
1154 After the aliases are established the digits from $a$ are copied into $b$ (lines @48,for@ to @50,}@) and then the excess | |
1155 digits of $b$ are set to zero (lines @53,for@ to @55,}@). Both ``for'' loops make use of the pointer aliases and in | |
1156 fact the alias for $b$ is carried through into the second ``for'' loop to clear the excess digits. This optimization | |
1157 allows the alias to stay in a machine register fairly easily between the two loops.
1158 | |
1159 \textbf{Remarks.} The use of pointer aliases is an implementation methodology first introduced in this function that will | |
1160 be used considerably in other functions. Technically, a pointer alias is simply a shorthand used to lower the
1161 number of pointer dereferencing operations required to access data. For example, a for loop may resemble | |
1162 | |
1163 \begin{alltt} | |
1164 for (x = 0; x < 100; x++) \{ | |
1165 a->num[4]->dp[x] = 0; | |
1166 \} | |
1167 \end{alltt} | |
1168 | |
1169 This could be re-written using aliases as | |
1170 | |
1171 \begin{alltt} | |
1172 mp_digit *tmpa; | |
1173 tmpa = a->num[4]->dp;
1174 for (x = 0; x < 100; x++) \{
1175    *tmpa++ = 0;
1176 \} | |
1177 \end{alltt} | |
1178 | |
1179 In this case an alias is used to access the | |
1180 array of digits within an mp\_int structure directly. It may seem that a pointer alias is strictly not required | |
1181 as a compiler may optimize out the redundant pointer operations. However, there are two dominant reasons to use aliases. | |
1182 | |
1183 The first reason is that most compilers will not effectively optimize pointer arithmetic. For example, some optimizations | |
1184 may work for the Microsoft Visual C++ compiler (MSVC) and not for the GNU C Compiler (GCC). Also some optimizations may | |
1185 work for GCC and not MSVC. As such it is ideal to find a common ground for as many compilers as possible. Pointer | |
1186 aliases optimize the code considerably before the compiler even reads the source code which means the end compiled code | |
1187 stands a better chance of being faster. | |
1188 | |
1189 The second reason is that pointer aliases often can make an algorithm simpler to read. Consider the first ``for'' | |
1190 loop of the function mp\_copy() re-written to not use pointer aliases. | |
1191 | |
1192 \begin{alltt} | |
1193 /* copy all the digits */ | |
1194 for (n = 0; n < a->used; n++) \{ | |
1195 b->dp[n] = a->dp[n]; | |
1196 \} | |
1197 \end{alltt} | |
1198 | |
1199 Whether this code is harder to read depends strongly on the individual. However, it is quantifiably slightly more | |
1200 complicated as there are four variables within the statement instead of just two. | |
1201 | |
1202 \subsubsection{Nested Statements} | |
1203 Another commonly used technique in the source routines is that certain sections of code are nested. This is used in | |
1204 particular with the pointer aliases to highlight code phases. For example, a Comba multiplier (discussed in chapter six) | |
1205 will typically have three different phases. First the temporaries are initialized, then the columns calculated and | |
1206 finally the carries are propagated. In this example the middle column production phase will typically be nested as it | |
1207 uses temporary variables and aliases the most. | |
1208 | |
1209 The nesting also simplifies the source code as variables that are nested are only valid for their scope. As a result
1210 the various temporary variables required do not propagate into other sections of code. | |
1211 | |
1212 | |
1213 \subsection{Creating a Clone} | |
1214 Another common operation is to make a local temporary copy of an mp\_int argument. To initialize an mp\_int | |
1215 and then copy another existing mp\_int into the newly initialized mp\_int will be known as creating a clone. This is
1216 useful within functions that need to modify an argument but do not wish to actually modify the original copy. The | |
1217 mp\_init\_copy algorithm has been designed to help perform this task. | |
1218 | |
1219 \begin{figure}[here] | |
1220 \begin{center} | |
1221 \begin{tabular}{l} | |
1222 \hline Algorithm \textbf{mp\_init\_copy}. \\ | |
1223 \textbf{Input}. An mp\_int $a$ and $b$\\ | |
1224 \textbf{Output}. $a$ is initialized to be a copy of $b$. \\ | |
1225 \hline \\ | |
1226 1. Init $a$. (\textit{mp\_init}) \\ | |
1227 2. Copy $b$ to $a$. (\textit{mp\_copy}) \\ | |
1228 3. Return the status of the copy operation. \\ | |
1229 \hline | |
1230 \end{tabular} | |
1231 \end{center} | |
1232 \caption{Algorithm mp\_init\_copy} | |
1233 \end{figure} | |
1234 | |
1235 \textbf{Algorithm mp\_init\_copy.} | |
1236 This algorithm will initialize an mp\_int variable and copy another previously initialized mp\_int variable into it. As | |
1237 such this algorithm will perform two operations in one step. | |
1238 | |
1239 EXAM,bn_mp_init_copy.c | |
1240 | |
1241 This will initialize \textbf{a} and make it a verbatim copy of the contents of \textbf{b}. Note that | |
1242 \textbf{a} will have its own memory allocated which means that \textbf{b} may be cleared after the call | |
1243 and \textbf{a} will be left intact. | |
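
A typical use is making a scratch copy of an argument that a function wishes to modify; the function below is hypothetical and only sketches the idiom.

\begin{verbatim}
int do_work(mp_int *x)
{
   mp_int t;
   int    err;

   /* t becomes an initialized, independent copy of x */
   if ((err = mp_init_copy(&t, x)) != MP_OKAY) {
      return err;
   }

   /* ... modify t freely; x is left untouched ... */

   mp_clear(&t);
   return MP_OKAY;
}
\end{verbatim}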
1244 | |
1245 \section{Zeroing an Integer} | |
1246 Resetting an mp\_int to the default state is a common step in many algorithms. The mp\_zero algorithm will be the algorithm used to
1247 perform this task. | |
1248 | |
1249 \begin{figure}[here] | |
1250 \begin{center} | |
1251 \begin{tabular}{l} | |
1252 \hline Algorithm \textbf{mp\_zero}. \\ | |
1253 \textbf{Input}. An mp\_int $a$ \\ | |
1254 \textbf{Output}. Zero the contents of $a$ \\ | |
1255 \hline \\ | |
1256 1. $a.used \leftarrow 0$ \\ | |
1257 2. $a.sign \leftarrow$ MP\_ZPOS \\ | |
1258 3. for $n$ from 0 to $a.alloc - 1$ do \\ | |
1259 \hspace{3mm}3.1 $a_n \leftarrow 0$ \\ | |
1260 \hline | |
1261 \end{tabular} | |
1262 \end{center} | |
1263 \caption{Algorithm mp\_zero} | |
1264 \end{figure} | |
1265 | |
1266 \textbf{Algorithm mp\_zero.} | |
1267 This algorithm simply resets an mp\_int to the default state.
1268 | |
1269 EXAM,bn_mp_zero.c | |
1270 | |
1271 After the function is completed, all of the digits are zeroed, the \textbf{used} count is zeroed and the | |
1272 \textbf{sign} variable is set to \textbf{MP\_ZPOS}. | |
1273 | |
1274 \section{Sign Manipulation} | |
1275 \subsection{Absolute Value} | |
1276 With the mp\_int representation of an integer, calculating the absolute value is trivial. The mp\_abs algorithm will compute | |
1277 the absolute value of an mp\_int. | |
1278 | |
1279 \begin{figure}[here] |
19 | 1280 \begin{center} |
1281 \begin{tabular}{l} | |
1282 \hline Algorithm \textbf{mp\_abs}. \\ | |
1283 \textbf{Input}. An mp\_int $a$ \\ | |
1284 \textbf{Output}. Computes $b = \vert a \vert$ \\ | |
1285 \hline \\ | |
1286 1. Copy $a$ to $b$. (\textit{mp\_copy}) \\ | |
1287 2. If the copy failed return(\textit{MP\_MEM}). \\ | |
1288 3. $b.sign \leftarrow MP\_ZPOS$ \\ | |
1289 4. Return(\textit{MP\_OKAY}) \\ | |
1290 \hline | |
1291 \end{tabular} | |
1292 \end{center} | |
1293 \caption{Algorithm mp\_abs} | |
1294 \end{figure} | |
1295 | |
1296 \textbf{Algorithm mp\_abs.} | |
1297 This algorithm computes the absolute value of an mp\_int input. First it copies $a$ over $b$. This is an example of an
1298 algorithm where the check in mp\_copy that determines if the source and destination are equal proves useful. This allows,
1299 for instance, the developer to pass the same mp\_int as the source and destination to this function without additional
1300 logic to handle it.
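
For example, both of the following calls are valid; the second computes the absolute value in place (error checking omitted for brevity).

\begin{verbatim}
mp_abs(&a, &b);   /* b = |a|, a is left unchanged                 */
mp_abs(&a, &a);   /* a = |a|, source and destination are the same */
\end{verbatim}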
1301 | |
1302 EXAM,bn_mp_abs.c | |
1303 | |
1304 This fairly trivial algorithm first eliminates non--required duplications (line @27,a != b@) and then sets the |
1305 \textbf{sign} flag to \textbf{MP\_ZPOS}. |
1306 |
19 | 1307 \subsection{Integer Negation} |
1308 With the mp\_int representation of an integer, calculating the negation is also trivial. The mp\_neg algorithm will compute | |
1309 the negative of an mp\_int input. | |
1310 | |
1311 \begin{figure}[here] | |
1312 \begin{center} | |
1313 \begin{tabular}{l} | |
1314 \hline Algorithm \textbf{mp\_neg}. \\ | |
1315 \textbf{Input}. An mp\_int $a$ \\ | |
1316 \textbf{Output}. Computes $b = -a$ \\ | |
1317 \hline \\ | |
1318 1. Copy $a$ to $b$. (\textit{mp\_copy}) \\ | |
1319 2. If the copy failed return(\textit{MP\_MEM}). \\ | |
1320 3. If $a.used = 0$ then return(\textit{MP\_OKAY}). \\ | |
1321 4. If $a.sign = MP\_ZPOS$ then do \\ | |
1322 \hspace{3mm}4.1 $b.sign = MP\_NEG$. \\ | |
1323 5. else do \\ | |
1324 \hspace{3mm}5.1 $b.sign = MP\_ZPOS$. \\ | |
1325 6. Return(\textit{MP\_OKAY}) \\ | |
1326 \hline | |
1327 \end{tabular} | |
1328 \end{center} | |
1329 \caption{Algorithm mp\_neg} | |
1330 \end{figure} | |
1331 | |
1332 \textbf{Algorithm mp\_neg.} | |
1333 This algorithm computes the negation of an input. First it copies $a$ over $b$. If $a$ has no used digits then | |
1334 the algorithm returns immediately. Otherwise it flips the sign flag and stores the result in $b$. Note that if | |
1335 $a$ had no digits then it must be positive by definition. Had step three been omitted then the algorithm would return | |
1336 zero as negative. | |
1337 | |
1338 EXAM,bn_mp_neg.c | |
1339 | |
1340 Like mp\_abs() this function avoids non--required duplications (line @21,a != b@) and then sets the sign. We |
1341 have to make sure that only non--zero values get a \textbf{sign} of \textbf{MP\_NEG}. If the mp\_int is zero |
1342 then the \textbf{sign} is hard--coded to \textbf{MP\_ZPOS}.
1343 |
19 | 1344 \section{Small Constants} |
1345 \subsection{Setting Small Constants} | |
1346 Often an mp\_int must be set to a relatively small value such as $1$ or $2$. For these cases the mp\_set algorithm is useful.
1347 | |
1348 \newpage\begin{figure}[here] |
19 | 1349 \begin{center} |
1350 \begin{tabular}{l} | |
1351 \hline Algorithm \textbf{mp\_set}. \\ | |
1352 \textbf{Input}. An mp\_int $a$ and a digit $b$ \\ | |
1353 \textbf{Output}. Make $a$ equivalent to $b$ \\ | |
1354 \hline \\ | |
1355 1. Zero $a$ (\textit{mp\_zero}). \\ | |
1356 2. $a_0 \leftarrow b \mbox{ (mod }\beta\mbox{)}$ \\ | |
1357 3. $a.used \leftarrow \left \lbrace \begin{array}{ll} | |
1358 1 & \mbox{if }a_0 > 0 \\ | |
1359 0 & \mbox{if }a_0 = 0 | |
1360 \end{array} \right .$ \\ | |
1361 \hline | |
1362 \end{tabular} | |
1363 \end{center} | |
1364 \caption{Algorithm mp\_set} | |
1365 \end{figure} | |
1366 | |
1367 \textbf{Algorithm mp\_set.} | |
1368 This algorithm sets an mp\_int to a small single digit value. Step number 1 ensures that the integer is reset to the default state. The
1369 single digit is set (\textit{modulo $\beta$}) and the \textbf{used} count is adjusted accordingly. | |
1370 | |
1371 EXAM,bn_mp_set.c | |
1372 | |
1373 First we zero (line @21,mp_zero@) the mp\_int to make sure that the other members are initialized for a |
1374 small positive constant. mp\_zero() ensures that the \textbf{sign} is positive and the \textbf{used} count |
1375 is zero. Next we set the digit and reduce it modulo $\beta$ (line @22,MP_MASK@). After this step we have to |
1376 check if the resulting digit is zero or not. If it is not then we set the \textbf{used} count to one, otherwise |
1377 to zero. |
1378 |
1379 We can quickly reduce modulo $\beta$ since it is of the form $2^k$ and a quick binary AND operation with |
1380 $2^k - 1$ will perform the same operation. |
19 | 1381 |
1382 One important limitation of this function is that it will only set one digit. The size of a digit is not fixed, meaning source code that uses
1383 this function should take that into account. Only trivially small constants can be set using this function. | |
1384 | |
1385 \subsection{Setting Large Constants} | |
1386 To overcome the limitations of the mp\_set algorithm the mp\_set\_int algorithm is ideal. It accepts a ``long'' | |
1387 data type as input and will always treat it as a 32-bit integer. | |
1388 | |
1389 \begin{figure}[here] | |
1390 \begin{center} | |
1391 \begin{tabular}{l} | |
1392 \hline Algorithm \textbf{mp\_set\_int}. \\ | |
1393 \textbf{Input}. An mp\_int $a$ and a ``long'' integer $b$ \\ | |
1394 \textbf{Output}. Make $a$ equivalent to $b$ \\ | |
1395 \hline \\ | |
1396 1. Zero $a$ (\textit{mp\_zero}) \\ | |
1397 2. for $n$ from 0 to 7 do \\ | |
1398 \hspace{3mm}2.1 $a \leftarrow a \cdot 16$ (\textit{mp\_mul2d}) \\ | |
1399 \hspace{3mm}2.2 $u \leftarrow \lfloor b / 2^{4(7 - n)} \rfloor \mbox{ (mod }16\mbox{)}$\\ | |
1400 \hspace{3mm}2.3 $a_0 \leftarrow a_0 + u$ \\ | |
1401 \hspace{3mm}2.4 $a.used \leftarrow a.used + 1$ \\ | |
1402 3. Clamp excess used digits (\textit{mp\_clamp}) \\ | |
1403 \hline | |
1404 \end{tabular} | |
1405 \end{center} | |
1406 \caption{Algorithm mp\_set\_int} | |
1407 \end{figure} | |
1408 | |
1409 \textbf{Algorithm mp\_set\_int.} | |
1410 The algorithm performs eight iterations of a simple loop where in each iteration four bits from the source are added to the | |
1411 mp\_int. Step 2.1 will multiply the current result by sixteen making room for four more bits in the less significant positions. In step 2.2 the | |
1412 next four bits from the source are extracted and added to the mp\_int. The \textbf{used} digit count is then
1413 incremented; this is required since if any of the leading digits were zero the mp\_int would otherwise report
1414 zero digits used and the newly added four bits would be ignored.
1415 | |
1416 Excess zero digits are trimmed in steps 2.1 and 3 by using higher level algorithms mp\_mul2d and mp\_clamp. | |
1417 | |
1418 EXAM,bn_mp_set_int.c | |
1419 | |
1420 This function sets four bits of the number at a time to handle all practical \textbf{DIGIT\_BIT} sizes. The weird | |
1421 addition on line @38,a->used@ ensures that the newly added bits are counted in the number of digits used. While it may not
1422 seem obvious as to why the digit counter does not grow exceedingly large it is because of the shift on line @27,mp_mul_2d@ | |
1423 as well as the call to mp\_clamp() on line @40,mp_clamp@. Both functions will clamp excess leading digits which keeps | |
1424 the number of used digits low. | |
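
To contrast the two functions, a short usage sketch follows (the return code of mp\_set\_int() is ignored here for brevity).

\begin{verbatim}
mp_set(&a, 7);                 /* a = 7; always at most one digit */
mp_set_int(&b, 0x12345678UL);  /* b = 0x12345678; works even when */
                               /* DIGIT_BIT is small              */
\end{verbatim}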
1425 | |
1426 \section{Comparisons} | |
1427 \subsection{Unsigned Comparisons}
1428 Comparing a multiple precision integer is performed with the exact same algorithm used to compare two decimal numbers. For example, | |
1429 to compare $1,234$ to $1,264$ the digits are extracted by their positions. That is we compare $1 \cdot 10^3 + 2 \cdot 10^2 + 3 \cdot 10^1 + 4 \cdot 10^0$ | |
1430 to $1 \cdot 10^3 + 2 \cdot 10^2 + 6 \cdot 10^1 + 4 \cdot 10^0$ by comparing single digits at a time starting with the highest magnitude | |
1431 positions. If any leading digit of one integer is greater than a digit in the same position of another integer then obviously it must be greater. | |
1432 | |
1433 The first comparison routine that will be developed is the unsigned magnitude compare which will perform a comparison based on the digits of two
1434 mp\_int variables alone. It will ignore the sign of the two inputs. Such a function is useful when an absolute comparison is required or if the | |
1435 signs are known to agree in advance. | |
1436 | |
1437 To facilitate working with the results of the comparison functions three constants are required. | |
1438 | |
1439 \begin{figure}[here] | |
1440 \begin{center} | |
1441 \begin{tabular}{|r|l|} | |
1442 \hline \textbf{Constant} & \textbf{Meaning} \\ | |
1443 \hline \textbf{MP\_GT} & Greater Than \\ | |
1444 \hline \textbf{MP\_EQ} & Equal To \\ | |
1445 \hline \textbf{MP\_LT} & Less Than \\ | |
1446 \hline | |
1447 \end{tabular} | |
1448 \end{center} | |
1449 \caption{Comparison Return Codes} | |
1450 \end{figure} | |
1451 | |
1452 \begin{figure}[here] | |
1453 \begin{center} | |
1454 \begin{tabular}{l} | |
1455 \hline Algorithm \textbf{mp\_cmp\_mag}. \\ | |
1456 \textbf{Input}. Two mp\_ints $a$ and $b$. \\ | |
1457 \textbf{Output}. Unsigned comparison results ($a$ to the left of $b$). \\ | |
1458 \hline \\ | |
1459 1. If $a.used > b.used$ then return(\textit{MP\_GT}) \\ | |
1460 2. If $a.used < b.used$ then return(\textit{MP\_LT}) \\ | |
1461 3. for $n$ from $a.used - 1$ to $0$ do \\
1462 \hspace{+3mm}3.1 if $a_n > b_n$ then return(\textit{MP\_GT}) \\ | |
1463 \hspace{+3mm}3.2 if $a_n < b_n$ then return(\textit{MP\_LT}) \\ | |
1464 4. Return(\textit{MP\_EQ}) \\ | |
1465 \hline | |
1466 \end{tabular} | |
1467 \end{center} | |
1468 \caption{Algorithm mp\_cmp\_mag} | |
1469 \end{figure} | |
1470 | |
1471 \textbf{Algorithm mp\_cmp\_mag.} | |
1472 By saying ``$a$ to the left of $b$'' it is meant that the comparison is with respect to $a$, that is if $a$ is greater than $b$ it will return | |
1473 \textbf{MP\_GT} and similar with respect to when $a = b$ and $a < b$. The first two steps compare the number of digits used in both $a$ and $b$. | |
1474 Obviously if the digit counts differ there would be an imaginary zero digit in the smaller number where the leading digit of the larger number is. | |
1475 If both have the same number of digits then the actual digits themselves must be compared starting at the leading digit.
1476 | |
1477 By step three both inputs must have the same number of digits so it is safe to start from either $a.used - 1$ or $b.used - 1$ and count down to
1478 the zero'th digit. If after all of the digits have been compared, no difference is found, the algorithm returns \textbf{MP\_EQ}. | |
1479 | |
1480 EXAM,bn_mp_cmp_mag.c | |
1481 | |
1482 The two if statements (lines @24,if@ and @28,if@) compare the number of digits in the two inputs. These two are |
1483 performed before all of the digits are compared since it is a very cheap test to perform and can potentially save |
1484 considerable time. The implementation given is also not valid without those two statements. $b.alloc$ may be |
1485 smaller than $a.used$, meaning that undefined values will be read from $b$ past the end of the array of digits. |
1486 |
1487 |
19 | 1488 |
1489 \subsection{Signed Comparisons} | |
1490 Comparing with sign considerations is also fairly critical in several routines (\textit{division for example}). Based on an unsigned magnitude | |
1491 comparison a trivial signed comparison algorithm can be written. | |
1492 | |
1493 \begin{figure}[here] | |
1494 \begin{center} | |
1495 \begin{tabular}{l} | |
1496 \hline Algorithm \textbf{mp\_cmp}. \\ | |
1497 \textbf{Input}. Two mp\_ints $a$ and $b$ \\ | |
1498 \textbf{Output}. Signed Comparison Results ($a$ to the left of $b$) \\ | |
1499 \hline \\ | |
1500 1. if $a.sign = MP\_NEG$ and $b.sign = MP\_ZPOS$ then return(\textit{MP\_LT}) \\ | |
1501 2. if $a.sign = MP\_ZPOS$ and $b.sign = MP\_NEG$ then return(\textit{MP\_GT}) \\ | |
1502 3. if $a.sign = MP\_NEG$ then \\ | |
1503 \hspace{+3mm}3.1 Return the unsigned comparison of $b$ and $a$ (\textit{mp\_cmp\_mag}) \\ | |
1504 4. Otherwise \\
1505 \hspace{+3mm}4.1 Return the unsigned comparison of $a$ and $b$ \\ | |
1506 \hline | |
1507 \end{tabular} | |
1508 \end{center} | |
1509 \caption{Algorithm mp\_cmp} | |
1510 \end{figure} | |
1511 | |
1512 \textbf{Algorithm mp\_cmp.} | |
1513 The first two steps compare the signs of the two inputs. If the signs do not agree then it can return right away with the appropriate | |
1514 comparison code. When the signs are equal the digits of the inputs must be compared to determine the correct result. In step | |
1515 three the unsigned comparison flips the order of the arguments since they are both negative. For instance, if $-a > -b$ then
1516 $\vert a \vert < \vert b \vert$. Step number four will compare the two when they are both positive. | |
1517 | |
1518 EXAM,bn_mp_cmp.c | |
1519 | |
1520 The two if statements (lines @22,if@ and @26,if@) perform the initial sign comparison. If the signs are not equal then whichever
1521 has the positive sign is larger. The inputs are compared (line @30,if@) based on magnitudes. If the signs were both |
1522 negative then the unsigned comparison is performed in the opposite direction (line @31,mp_cmp_mag@). Otherwise, the signs are assumed to |
19 | 1523 be both positive and a forward direction unsigned comparison is performed. |
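
In application code the result is normally dispatched against the three comparison constants, for example:

\begin{verbatim}
switch (mp_cmp(&a, &b)) {
case MP_GT:
   printf("a > b\n");
   break;
case MP_EQ:
   printf("a == b\n");
   break;
case MP_LT:
   printf("a < b\n");
   break;
}
\end{verbatim}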
1524 | |
1525 \section*{Exercises} | |
1526 \begin{tabular}{cl} | |
1527 $\left [ 2 \right ]$ & Modify algorithm mp\_set\_int to accept as input a variable length array of bits. \\ | |
1528 & \\ | |
1529 $\left [ 3 \right ]$ & Give the probability that algorithm mp\_cmp\_mag will have to compare $k$ digits \\ | |
1530 & of two random digits (of equal magnitude) before a difference is found. \\ | |
1531 & \\ | |
1532 $\left [ 1 \right ]$ & Suggest a simple method to speed up the implementation of mp\_cmp\_mag based \\ | |
1533 & on the observations made in the previous problem. \\ | |
1534 & | |
1535 \end{tabular} | |
1536 | |
1537 \chapter{Basic Arithmetic} | |
1538 \section{Introduction} | |
1539 At this point algorithms for initialization, clearing, zeroing, copying, comparing and setting small constants have been | |
1540 established. The next logical set of algorithms to develop are addition, subtraction and digit shifting algorithms. These | |
1541 algorithms make use of the lower level algorithms and are the crucial building block for the multiplication algorithms. It is very important
1542 that these algorithms are highly optimized. On their own they are simple $O(n)$ algorithms but they can be called from higher level algorithms | |
1543 which easily places them at $O(n^2)$ or even $O(n^3)$ work levels. | |
1544 | |
1545 MARK,SHIFTS | |
1546 All of the algorithms within this chapter make use of the logical bit shift operations denoted by $<<$ and $>>$ for left and right | |
1547 logical shifts respectively. A logical shift is analogous to sliding the decimal point of radix-10 representations. For example, the real | |
1548 number $0.9345$ is equivalent to $93.45\%$ which is found by sliding the decimal two places to the right (\textit{multiplying by $\beta^2 = 10^2$}).
1549 Algebraically a binary logical shift is equivalent to a division or multiplication by a power of two. | |
1550 For example, $a << k = a \cdot 2^k$ while $a >> k = \lfloor a/2^k \rfloor$. | |
1551 | |
1552 One significant difference between a logical shift and the way decimals are shifted is that digits below the zero'th position are removed | |
1553 from the number. For example, consider $1101_2 >> 1$; using decimal notation this would produce $110.1_2$. However, with a logical shift the
1554 result is $110_2$. | |
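
In C the same equivalences hold for unsigned integer types, for example:

\begin{verbatim}
unsigned int a = 13;        /* 1101 in binary          */
unsigned int b = a >> 1;    /* b = 6  = floor(13 / 2)  */
unsigned int c = a << 2;    /* c = 52 = 13 * 4         */
\end{verbatim}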
1555 | |
1556 \section{Addition and Subtraction} | |
1557 In common twos complement fixed precision arithmetic negative numbers are easily represented by subtraction from the modulus. For example, with 32-bit integers | |
1558 $a - b\mbox{ (mod }2^{32}\mbox{)}$ is the same as $a + (2^{32} - b) \mbox{ (mod }2^{32}\mbox{)}$ since $2^{32} \equiv 0 \mbox{ (mod }2^{32}\mbox{)}$. | |
1559 As a result subtraction can be performed with a trivial series of logical operations and an addition. | |
1560 | |
1561 However, in multiple precision arithmetic negative numbers are not represented in the same way. Instead a sign flag is used to keep track of the | |
1562 sign of the integer. As a result signed addition and subtraction are actually implemented as conditional usage of lower level addition or | |
1563 subtraction algorithms with the sign fixed up appropriately. | |
1564 | |
1565 The lower level algorithms will add or subtract integers without regard to the sign flag. That is they will add or subtract the magnitude of | |
1566 the integers respectively. | |
1567 | |
1568 \subsection{Low Level Addition} | |
1569 An unsigned addition of multiple precision integers is performed with the same long-hand algorithm used to add decimal numbers. That is to add the | |
1570 trailing digits first and propagate the resulting carry upwards. Since this is a lower level algorithm the name will have a ``s\_'' prefix. | |
1571 Historically that convention stems from the MPI library where ``s\_'' stood for static functions that were hidden from the developer entirely. | |
1572 | |
1573 \newpage | |
1574 \begin{figure}[!here] | |
1575 \begin{center} | |
1576 \begin{small} | |
1577 \begin{tabular}{l} | |
1578 \hline Algorithm \textbf{s\_mp\_add}. \\ | |
1579 \textbf{Input}. Two mp\_ints $a$ and $b$ \\ | |
1580 \textbf{Output}. The unsigned addition $c = \vert a \vert + \vert b \vert$. \\ | |
1581 \hline \\ | |
1582 1. if $a.used > b.used$ then \\ | |
1583 \hspace{+3mm}1.1 $min \leftarrow b.used$ \\ | |
1584 \hspace{+3mm}1.2 $max \leftarrow a.used$ \\ | |
1585 \hspace{+3mm}1.3 $x \leftarrow a$ \\ | |
1586 2. else \\ | |
1587 \hspace{+3mm}2.1 $min \leftarrow a.used$ \\ | |
1588 \hspace{+3mm}2.2 $max \leftarrow b.used$ \\ | |
1589 \hspace{+3mm}2.3 $x \leftarrow b$ \\ | |
1590 3. If $c.alloc < max + 1$ then grow $c$ to hold at least $max + 1$ digits (\textit{mp\_grow}) \\ | |
1591 4. $oldused \leftarrow c.used$ \\ | |
1592 5. $c.used \leftarrow max + 1$ \\ | |
1593 6. $u \leftarrow 0$ \\ | |
1594 7. for $n$ from $0$ to $min - 1$ do \\ | |
1595 \hspace{+3mm}7.1 $c_n \leftarrow a_n + b_n + u$ \\ | |
1596 \hspace{+3mm}7.2 $u \leftarrow c_n >> lg(\beta)$ \\ | |
1597 \hspace{+3mm}7.3 $c_n \leftarrow c_n \mbox{ (mod }\beta\mbox{)}$ \\ | |
1598 8. if $min \ne max$ then do \\ | |
1599 \hspace{+3mm}8.1 for $n$ from $min$ to $max - 1$ do \\ | |
1600 \hspace{+6mm}8.1.1 $c_n \leftarrow x_n + u$ \\ | |
1601 \hspace{+6mm}8.1.2 $u \leftarrow c_n >> lg(\beta)$ \\ | |
1602 \hspace{+6mm}8.1.3 $c_n \leftarrow c_n \mbox{ (mod }\beta\mbox{)}$ \\ | |
1603 9. $c_{max} \leftarrow u$ \\ | |
1604 10. if $oldused > max$ then \\
1605 \hspace{+3mm}10.1 for $n$ from $max + 1$ to $oldused - 1$ do \\ | |
1606 \hspace{+6mm}10.1.1 $c_n \leftarrow 0$ \\ | |
1607 11. Clamp excess digits in $c$. (\textit{mp\_clamp}) \\ | |
1608 12. Return(\textit{MP\_OKAY}) \\ | |
1609 \hline | |
1610 \end{tabular} | |
1611 \end{small} | |
1612 \end{center} | |
1613 \caption{Algorithm s\_mp\_add} | |
1614 \end{figure} | |
1615 | |
1616 \textbf{Algorithm s\_mp\_add.} | |
1617 This algorithm is loosely based on algorithm 14.7 of HAC \cite[pp. 594]{HAC} but has been extended to allow the inputs to have different magnitudes. | |
1618 Coincidentally the description of algorithm A in Knuth \cite[pp. 266]{TAOCPV2} shares the same deficiency as the algorithm from \cite{HAC}. Even the | |
1619 MIX pseudo machine code presented by Knuth \cite[pp. 266-267]{TAOCPV2} is incapable of handling inputs which are of different magnitudes. | |
1620 | |
1621 The first thing that has to be accomplished is to sort out which of the two inputs is the largest. The addition logic | |
1622 will simply add all of the smallest input to the largest input and store that first part of the result in the | |
1623 destination. Then it will apply a simpler addition loop to excess digits of the larger input. | |
1624 | |
1625 The first two steps will handle sorting the inputs such that $min$ and $max$ hold the digit counts of the two | |
1626 inputs. The variable $x$ will be an mp\_int alias for the largest input or the second input $b$ if they have the | |
same number of digits. After the inputs are sorted the destination $c$ is grown as required to accommodate the sum
of the two inputs. The original \textbf{used} count of $c$ is saved before $c.used$ is set to the new count.
1629 | |
At this point the first addition loop will go through as many digit positions as both inputs have in common. The carry
variable $u$ is set to zero outside the loop. Inside the loop an ``addition'' step requires three statements to produce
one digit of the sum. First
two digits from $a$ and $b$ are added together along with the carry $u$. The carry of this step is extracted and stored
in $u$ and finally the digit of the result $c_n$ is truncated within the range $0 \le c_n < \beta$.
1635 | |
1636 Now all of the digit positions that both inputs have in common have been exhausted. If $min \ne max$ then $x$ is an alias | |
1637 for one of the inputs that has more digits. A simplified addition loop is then used to essentially copy the remaining digits | |
1638 and the carry to the destination. | |
1639 | |
The final carry is stored in $c_{max}$ and any digits above $max$ up to $oldused$ are zeroed, which completes the addition.
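
To make the carry handling of steps 7.1 through 7.3 concrete, the following is a minimal sketch of one iteration of the addition loop. It assumes the LibTomMath constants DIGIT\_BIT (the value of $lg(\beta)$) and MP\_MASK (the value $\beta - 1$); it is only an illustration, not the library routine, which appears below.

\begin{verbatim}
#include <tommath.h>

/* One iteration of the addition loop of s_mp_add (sketch).
 * u is the running carry between digit positions.          */
static void add_step(mp_digit a_n, mp_digit b_n, mp_digit *u, mp_digit *c_n)
{
   *c_n  = a_n + b_n + *u;      /* step 7.1: add digits and incoming carry */
   *u    = *c_n >> DIGIT_BIT;   /* step 7.2: extract the outgoing carry    */
   *c_n &= MP_MASK;             /* step 7.3: reduce the digit modulo beta  */
}
\end{verbatim}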
1641 | |
1642 | |
1643 EXAM,bn_s_mp_add.c | |
1644 | |
We first sort (lines @27,if@ to @35,}@) the inputs based on magnitude and determine the $min$ and $max$ variables.
Note that $x$ is a pointer to an mp\_int assigned to the largest input; in effect it is a local alias. Next we
grow the destination (lines @37,init@ to @42,}@) to ensure that it can accommodate the result of the addition.
19 | 1648 |
1649 Similar to the implementation of mp\_copy this function uses the braced code and local aliases coding style. The three aliases that are on | |
1650 lines @56,tmpa@, @59,tmpb@ and @62,tmpc@ represent the two inputs and destination variables respectively. These aliases are used to ensure the | |
1651 compiler does not have to dereference $a$, $b$ or $c$ (respectively) to access the digits of the respective mp\_int. | |
1652 | |
The initial carry $u$ is cleared (line @65,u = 0@); note that $u$ is of type mp\_digit which ensures type
compatibility within the implementation. The initial addition loop (lines @66,for@ to @75,}@) adds digits from
both inputs until the smaller input runs out of digits. Similarly the conditional addition loop
(lines @81,for@ to @90,}@) adds the remaining digits from the larger of the two inputs. The addition is finished
with the final carry being stored in $tmpc$ (line @94,tmpc++@). Note the ``++'' operator within the same expression.
After line @94,tmpc++@, $tmpc$ will point to the $c.used$'th digit of the mp\_int $c$. This is useful
for the next loop (lines @97,for@ to @99,}@) which sets any old upper digits to zero.
19 | 1660 |
1661 \subsection{Low Level Subtraction} | |
The low level unsigned subtraction algorithm is very similar to the low level unsigned addition algorithm. The principal difference is that the
unsigned subtraction algorithm requires the result to be positive. That is, when computing $a - b$ the condition $\vert a \vert \ge \vert b\vert$ must
be met for this algorithm to function properly. Keep in mind this low level algorithm is not meant to be used in higher level algorithms directly.
This algorithm, as will be shown, can be used to create functional signed addition and subtraction algorithms.
1666 | |
1667 MARK,GAMMA | |
1668 | |
For this algorithm a new variable is required to make the description simpler. Recall from section 1.3.1 that an mp\_digit must be able to represent
the range $0 \le x < 2\beta$ for the algorithms to work correctly. However, it is allowable that an mp\_digit represent a larger range of values. For
1671 this algorithm we will assume that the variable $\gamma$ represents the number of bits available in a | |
1672 mp\_digit (\textit{this implies $2^{\gamma} > \beta$}). | |
1673 | |
For example, the default for LibTomMath is to use an ``unsigned long'' for the mp\_digit ``type'' while $\beta = 2^{28}$. In ISO C an ``unsigned long''
data type must be able to represent $0 \le x < 2^{32}$ meaning that in this case $\gamma \ge 32$.
19 | 1676 |
1677 \newpage\begin{figure}[!here] | |
1678 \begin{center} | |
1679 \begin{small} | |
1680 \begin{tabular}{l} | |
1681 \hline Algorithm \textbf{s\_mp\_sub}. \\ | |
1682 \textbf{Input}. Two mp\_ints $a$ and $b$ ($\vert a \vert \ge \vert b \vert$) \\ | |
1683 \textbf{Output}. The unsigned subtraction $c = \vert a \vert - \vert b \vert$. \\ | |
1684 \hline \\ | |
1685 1. $min \leftarrow b.used$ \\ | |
1686 2. $max \leftarrow a.used$ \\ | |
1687 3. If $c.alloc < max$ then grow $c$ to hold at least $max$ digits. (\textit{mp\_grow}) \\ | |
1688 4. $oldused \leftarrow c.used$ \\ | |
1689 5. $c.used \leftarrow max$ \\ | |
1690 6. $u \leftarrow 0$ \\ | |
1691 7. for $n$ from $0$ to $min - 1$ do \\ | |
1692 \hspace{3mm}7.1 $c_n \leftarrow a_n - b_n - u$ \\ | |
1693 \hspace{3mm}7.2 $u \leftarrow c_n >> (\gamma - 1)$ \\ | |
1694 \hspace{3mm}7.3 $c_n \leftarrow c_n \mbox{ (mod }\beta\mbox{)}$ \\ | |
1695 8. if $min < max$ then do \\ | |
1696 \hspace{3mm}8.1 for $n$ from $min$ to $max - 1$ do \\ | |
1697 \hspace{6mm}8.1.1 $c_n \leftarrow a_n - u$ \\ | |
1698 \hspace{6mm}8.1.2 $u \leftarrow c_n >> (\gamma - 1)$ \\ | |
1699 \hspace{6mm}8.1.3 $c_n \leftarrow c_n \mbox{ (mod }\beta\mbox{)}$ \\ | |
1700 9. if $oldused > max$ then do \\ | |
1701 \hspace{3mm}9.1 for $n$ from $max$ to $oldused - 1$ do \\ | |
1702 \hspace{6mm}9.1.1 $c_n \leftarrow 0$ \\ | |
1703 10. Clamp excess digits of $c$. (\textit{mp\_clamp}). \\ | |
1704 11. Return(\textit{MP\_OKAY}). \\ | |
1705 \hline | |
1706 \end{tabular} | |
1707 \end{small} | |
1708 \end{center} | |
1709 \caption{Algorithm s\_mp\_sub} | |
1710 \end{figure} | |
1711 | |
1712 \textbf{Algorithm s\_mp\_sub.} | |
1713 This algorithm performs the unsigned subtraction of two mp\_int variables under the restriction that the result must be positive. That is when | |
1714 passing variables $a$ and $b$ the condition that $\vert a \vert \ge \vert b \vert$ must be met for the algorithm to function correctly. This | |
1715 algorithm is loosely based on algorithm 14.9 \cite[pp. 595]{HAC} and is similar to algorithm S in \cite[pp. 267]{TAOCPV2} as well. As was the case | |
1716 of the algorithm s\_mp\_add both other references lack discussion concerning various practical details such as when the inputs differ in magnitude. | |
1717 | |
The initial sorting of the inputs is trivial in this algorithm since $a$ is guaranteed to have at least the same magnitude as $b$. Steps 1 and 2
1719 set the $min$ and $max$ variables. Unlike the addition routine there is guaranteed to be no carry which means that the final result can be at | |
1720 most $max$ digits in length as opposed to $max + 1$. Similar to the addition algorithm the \textbf{used} count of $c$ is copied locally and | |
1721 set to the maximal count for the operation. | |
1722 | |
1723 The subtraction loop that begins on step seven is essentially the same as the addition loop of algorithm s\_mp\_add except single precision | |
1724 subtraction is used instead. Note the use of the $\gamma$ variable to extract the carry (\textit{also known as the borrow}) within the subtraction | |
1725 loops. Under the assumption that two's complement single precision arithmetic is used this will successfully extract the desired carry. | |
1726 | |
1727 For example, consider subtracting $0101_2$ from $0100_2$ where $\gamma = 4$ and $\beta = 2$. The least significant bit will force a carry upwards to | |
the third bit which will be set to zero after the borrow. After the very first bit has been subtracted $4 - 1 \equiv 0011_2$ will remain. When the
1729 third bit of $0101_2$ is subtracted from the result it will cause another carry. In this case though the carry will be forced to propagate all the | |
1730 way to the most significant bit. | |
1731 | |
1732 Recall that $\beta < 2^{\gamma}$. This means that if a carry does occur just before the $lg(\beta)$'th bit it will propagate all the way to the most | |
1733 significant bit. Thus, the high order bits of the mp\_digit that are not part of the actual digit will either be all zero, or all one. All that | |
1734 is needed is a single zero or one bit for the carry. Therefore a single logical shift right by $\gamma - 1$ positions is sufficient to extract the | |
1735 carry. This method of carry extraction may seem awkward but the reason for it becomes apparent when the implementation is discussed. | |
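
The following sketch shows one iteration of the subtraction loop using the shift based borrow extraction just described. It assumes an unsigned mp\_digit of $\gamma$ bits along with the LibTomMath constant MP\_MASK; it is an illustration only, the actual routine appears below.

\begin{verbatim}
#include <limits.h>
#include <tommath.h>

/* One iteration of the subtraction loop of s_mp_sub (sketch).
 * The borrow is taken from the most significant bit of the
 * gamma-bit mp_digit after the unsigned subtraction wraps.    */
static void sub_step(mp_digit a_n, mp_digit b_n, mp_digit *u, mp_digit *c_n)
{
   *c_n  = a_n - b_n - *u;                               /* step 7.1 */
   *u    = *c_n >> (CHAR_BIT * sizeof(mp_digit) - 1);    /* step 7.2 */
   *c_n &= MP_MASK;                                      /* step 7.3 */
}
\end{verbatim}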
1736 | |
1737 If $b$ has a smaller magnitude than $a$ then step 9 will force the carry and copy operation to propagate through the larger input $a$ into $c$. Step | |
1738 10 will ensure that any leading digits of $c$ above the $max$'th position are zeroed. | |
1739 | |
1740 EXAM,bn_s_mp_sub.c | |
1741 | |
Like low level addition we ``sort'' the inputs, except in this case the sorting is hardcoded
(lines @24,min@ and @25,max@). In reality the $min$ and $max$ variables are only aliases and are only
used to make the source code easier to read. Again the pointer alias optimization is used
within this algorithm. The aliases $tmpa$, $tmpb$ and $tmpc$ are initialized
(lines @42,tmpa@, @43,tmpb@ and @44,tmpc@) for $a$, $b$ and $c$ respectively.

The first subtraction loop (lines @47,u = 0@ through @61,}@) subtracts digits from both inputs until the smaller of
the two inputs has been exhausted. As remarked earlier there is an implementation reason for using the ``awkward''
method of extracting the carry (line @57, >>@). The traditional method for extracting the carry would be to shift
by $lg(\beta)$ positions and logically AND the least significant bit. The AND operation is required because all of
the bits above the $lg(\beta)$'th bit will be set to one after a carry occurs from subtraction. This method
requires two relatively cheap operations to extract the carry. The other method is to simply shift the
most significant bit down to the least significant bit, thus extracting the carry with a single cheap operation. This
optimization only works on two's complement machines, which is a safe assumption to make.

If $a$ has a larger magnitude than $b$ an additional loop (lines @64,for@ through @73,}@) is required to propagate
the carry through $a$ and copy the result to $c$.
19 | 1759 |
1760 \subsection{High Level Addition} | |
Now that both lower level addition and subtraction algorithms have been established, an effective high level signed addition algorithm can be
constructed. This high level addition algorithm will be what other algorithms and developers will use to perform addition of mp\_int data
1763 types. | |
1764 | |
1765 Recall from section 5.2 that an mp\_int represents an integer with an unsigned mantissa (\textit{the array of digits}) and a \textbf{sign} | |
1766 flag. A high level addition is actually performed as a series of eight separate cases which can be optimized down to three unique cases. | |
1767 | |
1768 \begin{figure}[!here] | |
1769 \begin{center} | |
1770 \begin{tabular}{l} | |
1771 \hline Algorithm \textbf{mp\_add}. \\ | |
1772 \textbf{Input}. Two mp\_ints $a$ and $b$ \\ | |
1773 \textbf{Output}. The signed addition $c = a + b$. \\ | |
1774 \hline \\ | |
1775 1. if $a.sign = b.sign$ then do \\ | |
1776 \hspace{3mm}1.1 $c.sign \leftarrow a.sign$ \\ | |
1777 \hspace{3mm}1.2 $c \leftarrow \vert a \vert + \vert b \vert$ (\textit{s\_mp\_add})\\ | |
1778 2. else do \\ | |
1779 \hspace{3mm}2.1 if $\vert a \vert < \vert b \vert$ then do (\textit{mp\_cmp\_mag}) \\ | |
1780 \hspace{6mm}2.1.1 $c.sign \leftarrow b.sign$ \\ | |
1781 \hspace{6mm}2.1.2 $c \leftarrow \vert b \vert - \vert a \vert$ (\textit{s\_mp\_sub}) \\ | |
1782 \hspace{3mm}2.2 else do \\ | |
1783 \hspace{6mm}2.2.1 $c.sign \leftarrow a.sign$ \\ | |
1784 \hspace{6mm}2.2.2 $c \leftarrow \vert a \vert - \vert b \vert$ \\ | |
1785 3. Return(\textit{MP\_OKAY}). \\ | |
1786 \hline | |
1787 \end{tabular} | |
1788 \end{center} | |
1789 \caption{Algorithm mp\_add} | |
1790 \end{figure} | |
1791 | |
1792 \textbf{Algorithm mp\_add.} | |
1793 This algorithm performs the signed addition of two mp\_int variables. There is no reference algorithm to draw upon from | |
1794 either \cite{TAOCPV2} or \cite{HAC} since they both only provide unsigned operations. The algorithm is fairly | |
1795 straightforward but restricted since subtraction can only produce positive results. | |
1796 | |
1797 \begin{figure}[here] | |
1798 \begin{small} | |
1799 \begin{center} | |
1800 \begin{tabular}{|c|c|c|c|c|} | |
1801 \hline \textbf{Sign of $a$} & \textbf{Sign of $b$} & \textbf{$\vert a \vert > \vert b \vert $} & \textbf{Unsigned Operation} & \textbf{Result Sign Flag} \\ | |
1802 \hline $+$ & $+$ & Yes & $c = a + b$ & $a.sign$ \\ | |
1803 \hline $+$ & $+$ & No & $c = a + b$ & $a.sign$ \\ | |
1804 \hline $-$ & $-$ & Yes & $c = a + b$ & $a.sign$ \\ | |
1805 \hline $-$ & $-$ & No & $c = a + b$ & $a.sign$ \\ | |
1806 \hline &&&&\\ | |
1807 | |
1808 \hline $+$ & $-$ & No & $c = b - a$ & $b.sign$ \\ | |
1809 \hline $-$ & $+$ & No & $c = b - a$ & $b.sign$ \\ | |
1810 | |
1811 \hline &&&&\\ | |
1812 | |
1813 \hline $+$ & $-$ & Yes & $c = a - b$ & $a.sign$ \\ | |
1814 \hline $-$ & $+$ & Yes & $c = a - b$ & $a.sign$ \\ | |
1815 | |
1816 \hline | |
1817 \end{tabular} | |
1818 \end{center} | |
1819 \end{small} | |
1820 \caption{Addition Guide Chart} | |
1821 \label{fig:AddChart} | |
1822 \end{figure} | |
1823 | |
1824 Figure~\ref{fig:AddChart} lists all of the eight possible input combinations and is sorted to show that only three | |
specific cases need to be handled. The return codes of the unsigned operations at steps 1.2, 2.1.2 and 2.2.2 are
forwarded to step three to check for errors. This simplifies the description of the algorithm considerably and closely
follows how the implementation was actually written.
1828 | |
1829 Also note how the \textbf{sign} is set before the unsigned addition or subtraction is performed. Recall from the descriptions of algorithms | |
1830 s\_mp\_add and s\_mp\_sub that the mp\_clamp function is used at the end to trim excess digits. The mp\_clamp algorithm will set the \textbf{sign} | |
1831 to \textbf{MP\_ZPOS} when the \textbf{used} digit count reaches zero. | |
1832 | |
1833 For example, consider performing $-a + a$ with algorithm mp\_add. By the description of the algorithm the sign is set to \textbf{MP\_NEG} which would | |
produce a result of $-0$. However, since the sign is set before the unsigned addition is performed, the subsequent use of algorithm mp\_clamp
within algorithm s\_mp\_add will force $-0$ to become $0$.
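
A minimal sketch of the three case sign logic follows. It is not the library routine (which is included afterwards) but illustrates how the guide chart collapses into a same sign branch plus a magnitude comparison; the helper functions named are the LibTomMath routines discussed above.

\begin{verbatim}
#include <tommath.h>

/* Signed addition following the mp_add guide chart (sketch). */
static int signed_add(mp_int *a, mp_int *b, mp_int *c)
{
   if (a->sign == b->sign) {
      /* same sign: add magnitudes and keep the common sign */
      c->sign = a->sign;
      return s_mp_add(a, b, c);
   }
   if (mp_cmp_mag(a, b) == MP_LT) {
      /* opposite signs, |a| < |b|: compute |b| - |a| */
      c->sign = b->sign;
      return s_mp_sub(b, a, c);
   }
   /* opposite signs, |a| >= |b|: compute |a| - |b| */
   c->sign = a->sign;
   return s_mp_sub(a, b, c);
}
\end{verbatim}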
1836 | |
1837 EXAM,bn_mp_add.c | |
1838 | |
The source code follows the algorithm fairly closely. The most notable new source code addition is the usage of the $res$ integer variable which
is used to pass the result of the unsigned operations forward. Unlike in the algorithm, the variable $res$ is merely returned as is without
explicitly checking it and returning the constant \textbf{MP\_OKAY}. The observation is that this algorithm will succeed or fail only when the lower
level functions do so. Returning their return code is sufficient.
1843 | |
1844 \subsection{High Level Subtraction} | |
1845 The high level signed subtraction algorithm is essentially the same as the high level signed addition algorithm. | |
1846 | |
1847 \newpage\begin{figure}[!here] | |
1848 \begin{center} | |
1849 \begin{tabular}{l} | |
1850 \hline Algorithm \textbf{mp\_sub}. \\ | |
1851 \textbf{Input}. Two mp\_ints $a$ and $b$ \\ | |
1852 \textbf{Output}. The signed subtraction $c = a - b$. \\ | |
1853 \hline \\ | |
1854 1. if $a.sign \ne b.sign$ then do \\ | |
1855 \hspace{3mm}1.1 $c.sign \leftarrow a.sign$ \\ | |
1856 \hspace{3mm}1.2 $c \leftarrow \vert a \vert + \vert b \vert$ (\textit{s\_mp\_add}) \\ | |
1857 2. else do \\ | |
1858 \hspace{3mm}2.1 if $\vert a \vert \ge \vert b \vert$ then do (\textit{mp\_cmp\_mag}) \\ | |
1859 \hspace{6mm}2.1.1 $c.sign \leftarrow a.sign$ \\ | |
1860 \hspace{6mm}2.1.2 $c \leftarrow \vert a \vert - \vert b \vert$ (\textit{s\_mp\_sub}) \\ | |
1861 \hspace{3mm}2.2 else do \\ | |
1862 \hspace{6mm}2.2.1 $c.sign \leftarrow \left \lbrace \begin{array}{ll} | |
1863 MP\_ZPOS & \mbox{if }a.sign = MP\_NEG \\ | |
1864 MP\_NEG & \mbox{otherwise} \\ | |
1865 \end{array} \right .$ \\ | |
1866 \hspace{6mm}2.2.2 $c \leftarrow \vert b \vert - \vert a \vert$ \\ | |
1867 3. Return(\textit{MP\_OKAY}). \\ | |
1868 \hline | |
1869 \end{tabular} | |
1870 \end{center} | |
1871 \caption{Algorithm mp\_sub} | |
1872 \end{figure} | |
1873 | |
1874 \textbf{Algorithm mp\_sub.} | |
1875 This algorithm performs the signed subtraction of two inputs. Similar to algorithm mp\_add there is no reference in either \cite{TAOCPV2} or | |
1876 \cite{HAC}. Also this algorithm is restricted by algorithm s\_mp\_sub. Chart \ref{fig:SubChart} lists the eight possible inputs and | |
1877 the operations required. | |
1878 | |
1879 \begin{figure}[!here] | |
1880 \begin{small} | |
1881 \begin{center} | |
1882 \begin{tabular}{|c|c|c|c|c|} | |
1883 \hline \textbf{Sign of $a$} & \textbf{Sign of $b$} & \textbf{$\vert a \vert \ge \vert b \vert $} & \textbf{Unsigned Operation} & \textbf{Result Sign Flag} \\ | |
1884 \hline $+$ & $-$ & Yes & $c = a + b$ & $a.sign$ \\ | |
1885 \hline $+$ & $-$ & No & $c = a + b$ & $a.sign$ \\ | |
1886 \hline $-$ & $+$ & Yes & $c = a + b$ & $a.sign$ \\ | |
1887 \hline $-$ & $+$ & No & $c = a + b$ & $a.sign$ \\ | |
1888 \hline &&&& \\ | |
1889 \hline $+$ & $+$ & Yes & $c = a - b$ & $a.sign$ \\ | |
1890 \hline $-$ & $-$ & Yes & $c = a - b$ & $a.sign$ \\ | |
1891 \hline &&&& \\ | |
1892 \hline $+$ & $+$ & No & $c = b - a$ & $\mbox{opposite of }a.sign$ \\ | |
1893 \hline $-$ & $-$ & No & $c = b - a$ & $\mbox{opposite of }a.sign$ \\ | |
1894 \hline | |
1895 \end{tabular} | |
1896 \end{center} | |
1897 \end{small} | |
1898 \caption{Subtraction Guide Chart} | |
1899 \label{fig:SubChart} | |
1900 \end{figure} | |
1901 | |
Similar to the case of algorithm mp\_add the \textbf{sign} is set before the unsigned addition or subtraction is performed. This prevents the
algorithm from producing $-a - (-a) = -0$ as a result.
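
As with mp\_add, the sign logic can be sketched directly from the guide chart. The sketch below is illustrative only and uses the same LibTomMath helpers; the library routine follows.

\begin{verbatim}
#include <tommath.h>

/* Signed subtraction following the mp_sub guide chart (sketch). */
static int signed_sub(mp_int *a, mp_int *b, mp_int *c)
{
   if (a->sign != b->sign) {
      /* unlike signs: a - b is |a| + |b| with a's sign */
      c->sign = a->sign;
      return s_mp_add(a, b, c);
   }
   if (mp_cmp_mag(a, b) != MP_LT) {
      /* like signs, |a| >= |b|: result keeps a's sign */
      c->sign = a->sign;
      return s_mp_sub(a, b, c);
   }
   /* like signs, |a| < |b|: result takes the opposite of a's sign */
   c->sign = (a->sign == MP_ZPOS) ? MP_NEG : MP_ZPOS;
   return s_mp_sub(b, a, c);
}
\end{verbatim}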
1904 | |
1905 EXAM,bn_mp_sub.c | |
1906 | |
1907 Much like the implementation of algorithm mp\_add the variable $res$ is used to catch the return code of the unsigned addition or subtraction operations | |
1908 and forward it to the end of the function. On line @38, != MP_LT@ the ``not equal to'' \textbf{MP\_LT} expression is used to emulate a | |
1909 ``greater than or equal to'' comparison. | |
1910 | |
1911 \section{Bit and Digit Shifting} | |
1912 MARK,POLY | |
1913 It is quite common to think of a multiple precision integer as a polynomial in $x$, that is $y = f(\beta)$ where $f(x) = \sum_{i=0}^{n-1} a_i x^i$. | |
1914 This notation arises within discussion of Montgomery and Diminished Radix Reduction as well as Karatsuba multiplication and squaring. | |
1915 | |
1916 In order to facilitate operations on polynomials in $x$ as above a series of simple ``digit'' algorithms have to be established. That is to shift | |
the digits left or right as well as to shift individual bits of the digits left and right. It is important to note that not all ``shift'' operations
1918 are on radix-$\beta$ digits. | |
1919 | |
1920 \subsection{Multiplication by Two} | |
1921 | |
In a binary system where the radix is a power of two, multiplication by two not only arises often in other algorithms, it is also a fairly efficient
operation to perform. A single precision logical shift left is sufficient to multiply a single digit by two.
1924 | |
1925 \newpage\begin{figure}[!here] | |
1926 \begin{small} | |
1927 \begin{center} | |
1928 \begin{tabular}{l} | |
1929 \hline Algorithm \textbf{mp\_mul\_2}. \\ | |
1930 \textbf{Input}. One mp\_int $a$ \\ | |
1931 \textbf{Output}. $b = 2a$. \\ | |
1932 \hline \\ | |
1933 1. If $b.alloc < a.used + 1$ then grow $b$ to hold $a.used + 1$ digits. (\textit{mp\_grow}) \\ | |
1934 2. $oldused \leftarrow b.used$ \\ | |
1935 3. $b.used \leftarrow a.used$ \\ | |
1936 4. $r \leftarrow 0$ \\ | |
1937 5. for $n$ from 0 to $a.used - 1$ do \\ | |
1938 \hspace{3mm}5.1 $rr \leftarrow a_n >> (lg(\beta) - 1)$ \\ | |
1939 \hspace{3mm}5.2 $b_n \leftarrow (a_n << 1) + r \mbox{ (mod }\beta\mbox{)}$ \\ | |
1940 \hspace{3mm}5.3 $r \leftarrow rr$ \\ | |
1941 6. If $r \ne 0$ then do \\ | |
1942 \hspace{3mm}6.1 $b_{n + 1} \leftarrow r$ \\ | |
1943 \hspace{3mm}6.2 $b.used \leftarrow b.used + 1$ \\ | |
1944 7. If $b.used < oldused - 1$ then do \\ | |
1945 \hspace{3mm}7.1 for $n$ from $b.used$ to $oldused - 1$ do \\ | |
1946 \hspace{6mm}7.1.1 $b_n \leftarrow 0$ \\ | |
1947 8. $b.sign \leftarrow a.sign$ \\ | |
1948 9. Return(\textit{MP\_OKAY}).\\ | |
1949 \hline | |
1950 \end{tabular} | |
1951 \end{center} | |
1952 \end{small} | |
1953 \caption{Algorithm mp\_mul\_2} | |
1954 \end{figure} | |
1955 | |
1956 \textbf{Algorithm mp\_mul\_2.} | |
1957 This algorithm will quickly multiply a mp\_int by two provided $\beta$ is a power of two. Neither \cite{TAOCPV2} nor \cite{HAC} describe such | |
1958 an algorithm despite the fact it arises often in other algorithms. The algorithm is setup much like the lower level algorithm s\_mp\_add since | |
1959 it is for all intents and purposes equivalent to the operation $b = \vert a \vert + \vert a \vert$. | |
1960 | |
Step 1 grows the destination as required to accommodate the maximum number of \textbf{used} digits in the result. The initial \textbf{used} count
is set to $a.used$ at step 3. Only if there is a final carry will the \textbf{used} count require adjustment.
1963 | |
Step 5 is an optimized implementation of the addition loop for this specific case. That is, since the two values being added together
are the same there is no need to perform two reads from the digits of $a$. Step 5.1 performs a single precision shift on the current digit $a_n$ to
obtain what will be the carry for the next iteration. Step 5.2 calculates the $n$'th digit of the result as a single precision shift of $a_n$ plus
the previous carry. Recall from ~SHIFTS~ that $a_n << 1$ is equivalent to $a_n \cdot 2$. An iteration of the addition loop is finished with
forwarding the carry to the next iteration.
1969 | |
Step 6 takes care of any final carry by setting the $a.used$'th digit of the result to the carry and augmenting the \textbf{used} count of $b$.
Step 7 clears any leading digits of $b$ in case it originally had a larger magnitude than $a$.
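
The doubling loop of steps 5.1 through 5.3 and the final carry of step 6 can be sketched as follows, assuming $b$ has already been grown and that DIGIT\_BIT and MP\_MASK are the usual LibTomMath constants; the library version follows.

\begin{verbatim}
#include <tommath.h>

/* Core of mp_mul_2 (sketch): double the magnitude of a into b.
 * Assumes b has already been grown to a->used + 1 digits.      */
static void double_digits(const mp_int *a, mp_int *b)
{
   int      n;
   mp_digit r = 0;                     /* carry between digits */

   b->used = a->used;
   for (n = 0; n < a->used; n++) {
      mp_digit rr = a->dp[n] >> (DIGIT_BIT - 1);     /* step 5.1 */
      b->dp[n] = ((a->dp[n] << 1) | r) & MP_MASK;    /* step 5.2 */
      r = rr;                                        /* step 5.3 */
   }
   if (r != 0) {
      b->dp[b->used++] = r;            /* step 6: final carry */
   }
}
\end{verbatim}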
1972 | |
1973 EXAM,bn_mp_mul_2.c | |
1974 | |
This implementation is essentially an optimized version of s\_mp\_add for the case of doubling an input. The only noteworthy difference
is the use of the logical shift operator on line @52,<<@ to perform a single precision doubling.
1977 | |
1978 \subsection{Division by Two} | |
1979 A division by two can just as easily be accomplished with a logical shift right as multiplication by two can be with a logical shift left. | |
1980 | |
1981 \newpage\begin{figure}[!here] | |
1982 \begin{small} | |
1983 \begin{center} | |
1984 \begin{tabular}{l} | |
1985 \hline Algorithm \textbf{mp\_div\_2}. \\ | |
1986 \textbf{Input}. One mp\_int $a$ \\ | |
1987 \textbf{Output}. $b = a/2$. \\ | |
1988 \hline \\ | |
1989 1. If $b.alloc < a.used$ then grow $b$ to hold $a.used$ digits. (\textit{mp\_grow}) \\ | |
1990 2. If the reallocation failed return(\textit{MP\_MEM}). \\ | |
1991 3. $oldused \leftarrow b.used$ \\ | |
1992 4. $b.used \leftarrow a.used$ \\ | |
1993 5. $r \leftarrow 0$ \\ | |
1994 6. for $n$ from $b.used - 1$ to $0$ do \\ | |
1995 \hspace{3mm}6.1 $rr \leftarrow a_n \mbox{ (mod }2\mbox{)}$\\ | |
1996 \hspace{3mm}6.2 $b_n \leftarrow (a_n >> 1) + (r << (lg(\beta) - 1)) \mbox{ (mod }\beta\mbox{)}$ \\ | |
1997 \hspace{3mm}6.3 $r \leftarrow rr$ \\ | |
1998 7. If $b.used < oldused - 1$ then do \\ | |
1999 \hspace{3mm}7.1 for $n$ from $b.used$ to $oldused - 1$ do \\ | |
2000 \hspace{6mm}7.1.1 $b_n \leftarrow 0$ \\ | |
2001 8. $b.sign \leftarrow a.sign$ \\ | |
2002 9. Clamp excess digits of $b$. (\textit{mp\_clamp}) \\ | |
2003 10. Return(\textit{MP\_OKAY}).\\ | |
2004 \hline | |
2005 \end{tabular} | |
2006 \end{center} | |
2007 \end{small} | |
2008 \caption{Algorithm mp\_div\_2} | |
2009 \end{figure} | |
2010 | |
2011 \textbf{Algorithm mp\_div\_2.} | |
2012 This algorithm will divide an mp\_int by two using logical shifts to the right. Like mp\_mul\_2 it uses a modified low level addition | |
2013 core as the basis of the algorithm. Unlike mp\_mul\_2 the shift operations work from the leading digit to the trailing digit. The algorithm | |
could be written to work from the trailing digit to the leading digit; however, it would have to stop one short of $a.used - 1$ digits to prevent
2015 reading past the end of the array of digits. | |
2016 | |
Essentially the loop at step 6 is similar to that of mp\_mul\_2 except the logical shifts go in the opposite direction and the carry is taken from the
least significant bit instead of the most significant bit.
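
A sketch of the halving loop, working from the most significant digit downwards as described, is given below. It assumes $b$ has already been grown and uses the LibTomMath constants DIGIT\_BIT and mp\_clamp; the library listing follows.

\begin{verbatim}
#include <tommath.h>

/* Core of mp_div_2 (sketch): halve the magnitude of a into b.
 * Assumes b has already been grown to a->used digits.          */
static void halve_digits(const mp_int *a, mp_int *b)
{
   int      n;
   mp_digit r = 0;                     /* the bit shifted out of each digit */

   b->used = a->used;
   for (n = a->used - 1; n >= 0; n--) {
      mp_digit rr = a->dp[n] & 1;                           /* step 6.1 */
      b->dp[n] = (a->dp[n] >> 1) | (r << (DIGIT_BIT - 1));  /* step 6.2 */
      r = rr;                                               /* step 6.3 */
   }
   mp_clamp(b);                        /* step 9: drop a leading zero digit */
}
\end{verbatim}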
2019 | |
2020 EXAM,bn_mp_div_2.c | |
2021 | |
2022 \section{Polynomial Basis Operations} | |
2023 Recall from ~POLY~ that any integer can be represented as a polynomial in $x$ as $y = f(\beta)$. Such a representation is also known as | |
2024 the polynomial basis \cite[pp. 48]{ROSE}. Given such a notation a multiplication or division by $x$ amounts to shifting whole digits a single | |
2025 place. The need for such operations arises in several other higher level algorithms such as Barrett and Montgomery reduction, integer | |
2026 division and Karatsuba multiplication. | |
2027 | |
2028 Converting from an array of digits to polynomial basis is very simple. Consider the integer $y \equiv (a_2, a_1, a_0)_{\beta}$ and recall that | |
2029 $y = \sum_{i=0}^{2} a_i \beta^i$. Simply replace $\beta$ with $x$ and the expression is in polynomial basis. For example, $f(x) = 8x + 9$ is the | |
2030 polynomial basis representation for $89$ using radix ten. That is, $f(10) = 8(10) + 9 = 89$. | |
2031 | |
2032 \subsection{Multiplication by $x$} | |
2033 | |
2034 Given a polynomial in $x$ such as $f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_0$ multiplying by $x$ amounts to shifting the coefficients up one | |
2035 degree. In this case $f(x) \cdot x = a_n x^{n+1} + a_{n-1} x^n + ... + a_0 x$. From a scalar basis point of view multiplying by $x$ is equivalent to | |
2036 multiplying by the integer $\beta$. | |
2037 | |
2038 \newpage\begin{figure}[!here] | |
2039 \begin{small} | |
2040 \begin{center} | |
2041 \begin{tabular}{l} | |
2042 \hline Algorithm \textbf{mp\_lshd}. \\ | |
2043 \textbf{Input}. One mp\_int $a$ and an integer $b$ \\ | |
2044 \textbf{Output}. $a \leftarrow a \cdot \beta^b$ (equivalent to multiplication by $x^b$). \\ | |
2045 \hline \\ | |
2046 1. If $b \le 0$ then return(\textit{MP\_OKAY}). \\ | |
2047 2. If $a.alloc < a.used + b$ then grow $a$ to at least $a.used + b$ digits. (\textit{mp\_grow}). \\ | |
2048 3. If the reallocation failed return(\textit{MP\_MEM}). \\ | |
2049 4. $a.used \leftarrow a.used + b$ \\ | |
2050 5. $i \leftarrow a.used - 1$ \\ | |
2051 6. $j \leftarrow a.used - 1 - b$ \\ | |
2052 7. for $n$ from $a.used - 1$ to $b$ do \\ | |
2053 \hspace{3mm}7.1 $a_{i} \leftarrow a_{j}$ \\ | |
2054 \hspace{3mm}7.2 $i \leftarrow i - 1$ \\ | |
2055 \hspace{3mm}7.3 $j \leftarrow j - 1$ \\ | |
2056 8. for $n$ from 0 to $b - 1$ do \\ | |
2057 \hspace{3mm}8.1 $a_n \leftarrow 0$ \\ | |
2058 9. Return(\textit{MP\_OKAY}). \\ | |
2059 \hline | |
2060 \end{tabular} | |
2061 \end{center} | |
2062 \end{small} | |
2063 \caption{Algorithm mp\_lshd} | |
2064 \end{figure} | |
2065 | |
2066 \textbf{Algorithm mp\_lshd.} | |
2067 This algorithm multiplies an mp\_int by the $b$'th power of $x$. This is equivalent to multiplying by $\beta^b$. The algorithm differs | |
from the other algorithms presented so far as it performs the operation in place instead of storing the result in a separate location. The
2069 motivation behind this change is due to the way this function is typically used. Algorithms such as mp\_add store the result in an optionally | |
2070 different third mp\_int because the original inputs are often still required. Algorithm mp\_lshd (\textit{and similarly algorithm mp\_rshd}) is | |
2071 typically used on values where the original value is no longer required. The algorithm will return success immediately if | |
$b \le 0$ since the rest of the algorithm is only valid when $b > 0$. \\
2073 | |
First the destination $a$ is grown as required to accommodate the result. The counters $i$ and $j$ are used to form a \textit{sliding window} over
2075 the digits of $a$ of length $b$. The head of the sliding window is at $i$ (\textit{the leading digit}) and the tail at $j$ (\textit{the trailing digit}). | |
2076 The loop on step 7 copies the digit from the tail to the head. In each iteration the window is moved down one digit. The last loop on | |
2077 step 8 sets the lower $b$ digits to zero. | |
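
Expressed without the pointer aliases used by the library (the real listing follows), the sliding window copy of steps 7 and 8 amounts to the two loops sketched below, assuming the input has already been grown by $b$ digits.

\begin{verbatim}
#include <tommath.h>

/* Core of mp_lshd (sketch): shift the digits of a up by b places.
 * Assumes a has already been grown to a->used + b digits.         */
static void shift_digits_left(mp_int *a, int b)
{
   int n;

   a->used += b;
   /* step 7: copy each digit from the tail of the window to the head */
   for (n = a->used - 1; n >= b; n--) {
      a->dp[n] = a->dp[n - b];
   }
   /* step 8: zero the lower b digits */
   for (n = 0; n < b; n++) {
      a->dp[n] = 0;
   }
}
\end{verbatim}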
2078 | |
2079 \newpage | |
2080 FIGU,sliding_window,Sliding Window Movement | |
2081 | |
2082 EXAM,bn_mp_lshd.c | |
2083 | |
The if statement (line @24,if@) ensures that the $b$ variable is greater than zero since we do not interpret negative
shift counts properly. The \textbf{used} count is incremented by $b$ before the copy loop begins. This eliminates
the need for an additional variable in the for loop. The variable $top$ (line @42,top@) is an alias
for the leading digit while $bottom$ (line @45,bottom@) is an alias for the trailing edge. The aliases form a
window of exactly $b$ digits over the input.
19 | 2089 |
2090 \subsection{Division by $x$} | |
2091 | |
2092 Division by powers of $x$ is easily achieved by shifting the digits right and removing any that will end up to the right of the zero'th digit. | |
2093 | |
2094 \newpage\begin{figure}[!here] | |
2095 \begin{small} | |
2096 \begin{center} | |
2097 \begin{tabular}{l} | |
2098 \hline Algorithm \textbf{mp\_rshd}. \\ | |
2099 \textbf{Input}. One mp\_int $a$ and an integer $b$ \\ | |
2100 \textbf{Output}. $a \leftarrow a / \beta^b$ (Divide by $x^b$). \\ | |
2101 \hline \\ | |
2102 1. If $b \le 0$ then return. \\ | |
2103 2. If $a.used \le b$ then do \\ | |
2104 \hspace{3mm}2.1 Zero $a$. (\textit{mp\_zero}). \\ | |
2105 \hspace{3mm}2.2 Return. \\ | |
2106 3. $i \leftarrow 0$ \\ | |
2107 4. $j \leftarrow b$ \\ | |
2108 5. for $n$ from 0 to $a.used - b - 1$ do \\ | |
2109 \hspace{3mm}5.1 $a_i \leftarrow a_j$ \\ | |
2110 \hspace{3mm}5.2 $i \leftarrow i + 1$ \\ | |
2111 \hspace{3mm}5.3 $j \leftarrow j + 1$ \\ | |
2112 6. for $n$ from $a.used - b$ to $a.used - 1$ do \\ | |
2113 \hspace{3mm}6.1 $a_n \leftarrow 0$ \\ | |
2114 7. $a.used \leftarrow a.used - b$ \\ | |
2115 8. Return. \\ | |
2116 \hline | |
2117 \end{tabular} | |
2118 \end{center} | |
2119 \end{small} | |
2120 \caption{Algorithm mp\_rshd} | |
2121 \end{figure} | |
2122 | |
2123 \textbf{Algorithm mp\_rshd.} | |
This algorithm divides the input in place by the $b$'th power of $x$. It is analogous to dividing by $\beta^b$ but much quicker since
2125 it does not require single precision division. This algorithm does not actually return an error code as it cannot fail. | |
2126 | |
2127 If the input $b$ is less than one the algorithm quickly returns without performing any work. If the \textbf{used} count is less than or equal | |
2128 to the shift count $b$ then it will simply zero the input and return. | |
2129 | |
2130 After the trivial cases of inputs have been handled the sliding window is setup. Much like the case of algorithm mp\_lshd a sliding window that | |
2131 is $b$ digits wide is used to copy the digits. Unlike mp\_lshd the window slides in the opposite direction from the trailing to the leading digit. | |
2132 Also the digits are copied from the leading to the trailing edge. | |
2133 | |
2134 Once the window copy is complete the upper digits must be zeroed and the \textbf{used} count decremented. | |
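
A sketch of the same digit movement without the pointer aliases of the library version is shown below; it assumes the trivial cases ($b \le 0$ or $b \ge a.used$) have already been handled as described above.

\begin{verbatim}
#include <tommath.h>

/* Core of mp_rshd (sketch): shift the digits of a down by b places.
 * Assumes 0 < b < a->used.                                          */
static void shift_digits_right(mp_int *a, int b)
{
   int n;

   /* step 5: slide the upper digits down into place */
   for (n = 0; n < a->used - b; n++) {
      a->dp[n] = a->dp[n + b];
   }
   /* step 6: zero the digits that are now above the result */
   for (n = a->used - b; n < a->used; n++) {
      a->dp[n] = 0;
   }
   a->used -= b;                       /* step 7 */
}
\end{verbatim}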
2135 | |
2136 EXAM,bn_mp_rshd.c | |
2137 | |
The only noteworthy element of this routine is the lack of a return type since it cannot fail. Like mp\_lshd() we
form a sliding window, except we copy in the other direction. After the window copy loop (line @59,for (;@) we zero
the upper digits of the input to make sure the result is correct.
19 | 2141 |
2142 \section{Powers of Two} | |
2143 | |
Now that algorithms for moving single bits as well as whole digits exist, algorithms for moving the ``in between'' distances are required. For
example, the ability to quickly multiply by $2^k$ for any $k$ without using a full multiplier algorithm would prove useful. Instead of performing single
shifts $k$ times to achieve a multiplication by $2^{\pm k}$, a mixture of whole digit shifting and partial digit shifting is employed.
2147 | |
2148 \subsection{Multiplication by Power of Two} | |
2149 | |
2150 \newpage\begin{figure}[!here] | |
2151 \begin{small} | |
2152 \begin{center} | |
2153 \begin{tabular}{l} | |
2154 \hline Algorithm \textbf{mp\_mul\_2d}. \\ | |
2155 \textbf{Input}. One mp\_int $a$ and an integer $b$ \\ | |
2156 \textbf{Output}. $c \leftarrow a \cdot 2^b$. \\ | |
2157 \hline \\ | |
2158 1. $c \leftarrow a$. (\textit{mp\_copy}) \\ | |
2159 2. If $c.alloc < c.used + \lfloor b / lg(\beta) \rfloor + 2$ then grow $c$ accordingly. \\ | |
2160 3. If the reallocation failed return(\textit{MP\_MEM}). \\ | |
2161 4. If $b \ge lg(\beta)$ then \\ | |
2162 \hspace{3mm}4.1 $c \leftarrow c \cdot \beta^{\lfloor b / lg(\beta) \rfloor}$ (\textit{mp\_lshd}). \\ | |
2163 \hspace{3mm}4.2 If step 4.1 failed return(\textit{MP\_MEM}). \\ | |
2164 5. $d \leftarrow b \mbox{ (mod }lg(\beta)\mbox{)}$ \\ | |
2165 6. If $d \ne 0$ then do \\ | |
2166 \hspace{3mm}6.1 $mask \leftarrow 2^d$ \\ | |
2167 \hspace{3mm}6.2 $r \leftarrow 0$ \\ | |
2168 \hspace{3mm}6.3 for $n$ from $0$ to $c.used - 1$ do \\ | |
2169 \hspace{6mm}6.3.1 $rr \leftarrow c_n >> (lg(\beta) - d) \mbox{ (mod }mask\mbox{)}$ \\ | |
2170 \hspace{6mm}6.3.2 $c_n \leftarrow (c_n << d) + r \mbox{ (mod }\beta\mbox{)}$ \\ | |
2171 \hspace{6mm}6.3.3 $r \leftarrow rr$ \\ | |
2172 \hspace{3mm}6.4 If $r > 0$ then do \\ | |
2173 \hspace{6mm}6.4.1 $c_{c.used} \leftarrow r$ \\ | |
2174 \hspace{6mm}6.4.2 $c.used \leftarrow c.used + 1$ \\ | |
2175 7. Return(\textit{MP\_OKAY}). \\ | |
2176 \hline | |
2177 \end{tabular} | |
2178 \end{center} | |
2179 \end{small} | |
2180 \caption{Algorithm mp\_mul\_2d} | |
2181 \end{figure} | |
2182 | |
2183 \textbf{Algorithm mp\_mul\_2d.} | |
2184 This algorithm multiplies $a$ by $2^b$ and stores the result in $c$. The algorithm uses algorithm mp\_lshd and a derivative of algorithm mp\_mul\_2 to | |
2185 quickly compute the product. | |
2186 | |
First the algorithm will multiply $a$ by $x^{\lfloor b / lg(\beta) \rfloor}$ which will ensure that the remaining multiplicand is less than
$\beta$. For example, if $b = 37$ and $\beta = 2^{28}$ then this step will multiply by $x$, leaving a multiplication by $2^{37 - 28} = 2^{9}$
still to perform.
2190 | |
2191 After the digits have been shifted appropriately at most $lg(\beta) - 1$ shifts are left to perform. Step 5 calculates the number of remaining shifts | |
2192 required. If it is non-zero a modified shift loop is used to calculate the remaining product. | |
Essentially the loop is a generic version of algorithm mp\_mul\_2 designed to handle any shift count in the range $1 \le x < lg(\beta)$. The $mask$
2194 variable is used to extract the upper $d$ bits to form the carry for the next iteration. | |
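
A sketch of the single precision shift loop of step 6 follows, with the $shift$ and $mask$ values precomputed as in the source; DIGIT\_BIT and MP\_MASK are assumed to be the LibTomMath constants and $c$ is assumed to have been grown already.

\begin{verbatim}
#include <tommath.h>

/* Step 6 of mp_mul_2d (sketch): shift c left by d bits, 0 < d < DIGIT_BIT.
 * Assumes c has already been grown by one extra digit.                     */
static void shift_bits_left(mp_int *c, int d)
{
   mp_digit mask  = ((mp_digit)1 << d) - 1;   /* selects the top d bits */
   int      shift = DIGIT_BIT - d;
   mp_digit r = 0, rr;
   int      n;

   for (n = 0; n < c->used; n++) {
      rr = (c->dp[n] >> shift) & mask;               /* step 6.3.1 */
      c->dp[n] = ((c->dp[n] << d) | r) & MP_MASK;    /* step 6.3.2 */
      r = rr;                                        /* step 6.3.3 */
   }
   if (r != 0) {
      c->dp[c->used++] = r;                          /* step 6.4   */
   }
}
\end{verbatim}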
2195 | |
This algorithm is loosely measured as an $O(2n)$ algorithm which means that if the input is $n$ digits then it takes $2n$ ``time'' to
complete. It is possible to optimize this algorithm down to an $O(n)$ algorithm at a cost of making the algorithm slightly harder to follow.
2198 | |
2199 EXAM,bn_mp_mul_2d.c | |
2200 | |
The shifting is performed in--place which means the first step (line @24,a != c@) is to copy the input to the
destination. The call to mp\_copy() is skipped when the two mp\_ints are already the same. The destination then
has to be grown (line @31,grow@) to accommodate the result.

If the shift count $b$ is larger than $lg(\beta)$ then a call to mp\_lshd() is used to handle all of the multiples
of $lg(\beta)$, leaving a remaining shift of $lg(\beta) - 1$ or fewer bits. Inside the actual shift
loop (lines @45,if@ to @76,}@) we make use of pre--computed values $shift$ and $mask$. These are used to
extract the carry bit(s) to pass into the next iteration of the loop. The $r$ and $rr$ variables form a
chain between consecutive iterations to propagate the carry.
19 | 2210 |
2211 \subsection{Division by Power of Two} | |
2212 | |
2213 \newpage\begin{figure}[!here] | |
2214 \begin{small} | |
2215 \begin{center} | |
2216 \begin{tabular}{l} | |
2217 \hline Algorithm \textbf{mp\_div\_2d}. \\ | |
2218 \textbf{Input}. One mp\_int $a$ and an integer $b$ \\ | |
2219 \textbf{Output}. $c \leftarrow \lfloor a / 2^b \rfloor, d \leftarrow a \mbox{ (mod }2^b\mbox{)}$. \\ | |
2220 \hline \\ | |
2221 1. If $b \le 0$ then do \\ | |
2222 \hspace{3mm}1.1 $c \leftarrow a$ (\textit{mp\_copy}) \\ | |
2223 \hspace{3mm}1.2 $d \leftarrow 0$ (\textit{mp\_zero}) \\ | |
2224 \hspace{3mm}1.3 Return(\textit{MP\_OKAY}). \\ | |
2225 2. $c \leftarrow a$ \\ | |
2226 3. $d \leftarrow a \mbox{ (mod }2^b\mbox{)}$ (\textit{mp\_mod\_2d}) \\ | |
2227 4. If $b \ge lg(\beta)$ then do \\ | |
2228 \hspace{3mm}4.1 $c \leftarrow \lfloor c/\beta^{\lfloor b/lg(\beta) \rfloor} \rfloor$ (\textit{mp\_rshd}). \\ | |
2229 5. $k \leftarrow b \mbox{ (mod }lg(\beta)\mbox{)}$ \\ | |
2230 6. If $k \ne 0$ then do \\ | |
2231 \hspace{3mm}6.1 $mask \leftarrow 2^k$ \\ | |
2232 \hspace{3mm}6.2 $r \leftarrow 0$ \\ | |
2233 \hspace{3mm}6.3 for $n$ from $c.used - 1$ to $0$ do \\ | |
2234 \hspace{6mm}6.3.1 $rr \leftarrow c_n \mbox{ (mod }mask\mbox{)}$ \\ | |
2235 \hspace{6mm}6.3.2 $c_n \leftarrow (c_n >> k) + (r << (lg(\beta) - k))$ \\ | |
2236 \hspace{6mm}6.3.3 $r \leftarrow rr$ \\ | |
2237 7. Clamp excess digits of $c$. (\textit{mp\_clamp}) \\ | |
2238 8. Return(\textit{MP\_OKAY}). \\ | |
2239 \hline | |
2240 \end{tabular} | |
2241 \end{center} | |
2242 \end{small} | |
2243 \caption{Algorithm mp\_div\_2d} | |
2244 \end{figure} | |
2245 | |
2246 \textbf{Algorithm mp\_div\_2d.} | |
2247 This algorithm will divide an input $a$ by $2^b$ and produce the quotient and remainder. The algorithm is designed much like algorithm | |
2248 mp\_mul\_2d by first using whole digit shifts then single precision shifts. This algorithm will also produce the remainder of the division | |
2249 by using algorithm mp\_mod\_2d. | |
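
As a usage sketch, dividing by $2^{37}$ with $\beta = 2^{28}$ amounts to one whole digit shift plus a nine bit shift, with the remainder captured via mp\_mod\_2d before the quotient is formed. The illustrative wrapper below (the function name is hypothetical) simply shows how the routine is called.

\begin{verbatim}
#include <tommath.h>

/* Split a into quotient q = floor(a / 2^37) and remainder r = a mod 2^37. */
static int split_by_2_37(mp_int *a, mp_int *q, mp_int *r)
{
   /* passing a non-NULL last argument also requests the remainder */
   return mp_div_2d(a, 37, q, r);
}
\end{verbatim}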
2250 | |
2251 EXAM,bn_mp_div_2d.c | |
2252 | |
2253 The implementation of algorithm mp\_div\_2d is slightly different than the algorithm specifies. The remainder $d$ may be optionally | |
2254 ignored by passing \textbf{NULL} as the pointer to the mp\_int variable. The temporary mp\_int variable $t$ is used to hold the | |
2255 result of the remainder operation until the end. This allows $d$ and $a$ to represent the same mp\_int without modifying $a$ before | |
2256 the quotient is obtained. | |
2257 | |
2258 The remainder of the source code is essentially the same as the source code for mp\_mul\_2d. The only significant difference is |
2259 the direction of the shifts. |
19 | 2260 |
2261 \subsection{Remainder of Division by Power of Two} | |
2262 | |
The last algorithm in the series of polynomial basis power of two algorithms is calculating the remainder of division by $2^b$. This
algorithm benefits from the fact that in two's complement arithmetic $a \mbox{ (mod }2^b\mbox{)}$ is the same as $a$ AND $2^b - 1$. For example,
$43 \mbox{ (mod }2^3\mbox{)} = 3$, which is precisely $101011_2$ AND $000111_2 = 011_2$.
2265 | |
2266 \begin{figure}[!here] | |
2267 \begin{small} | |
2268 \begin{center} | |
2269 \begin{tabular}{l} | |
2270 \hline Algorithm \textbf{mp\_mod\_2d}. \\ | |
2271 \textbf{Input}. One mp\_int $a$ and an integer $b$ \\ | |
2272 \textbf{Output}. $c \leftarrow a \mbox{ (mod }2^b\mbox{)}$. \\ | |
2273 \hline \\ | |
2274 1. If $b \le 0$ then do \\ | |
2275 \hspace{3mm}1.1 $c \leftarrow 0$ (\textit{mp\_zero}) \\ | |
2276 \hspace{3mm}1.2 Return(\textit{MP\_OKAY}). \\ | |
2277 2. If $b > a.used \cdot lg(\beta)$ then do \\ | |
2278 \hspace{3mm}2.1 $c \leftarrow a$ (\textit{mp\_copy}) \\ | |
2279 \hspace{3mm}2.2 Return the result of step 2.1. \\ | |
2280 3. $c \leftarrow a$ \\ | |
2281 4. If step 3 failed return(\textit{MP\_MEM}). \\ | |
2282 5. for $n$ from $\lceil b / lg(\beta) \rceil$ to $c.used$ do \\ | |
2283 \hspace{3mm}5.1 $c_n \leftarrow 0$ \\ | |
2284 6. $k \leftarrow b \mbox{ (mod }lg(\beta)\mbox{)}$ \\ | |
2285 7. $c_{\lfloor b / lg(\beta) \rfloor} \leftarrow c_{\lfloor b / lg(\beta) \rfloor} \mbox{ (mod }2^{k}\mbox{)}$. \\ | |
2286 8. Clamp excess digits of $c$. (\textit{mp\_clamp}) \\ | |
2287 9. Return(\textit{MP\_OKAY}). \\ | |
2288 \hline | |
2289 \end{tabular} | |
2290 \end{center} | |
2291 \end{small} | |
2292 \caption{Algorithm mp\_mod\_2d} | |
2293 \end{figure} | |
2294 | |
2295 \textbf{Algorithm mp\_mod\_2d.} | |
2296 This algorithm will quickly calculate the value of $a \mbox{ (mod }2^b\mbox{)}$. First if $b$ is less than or equal to zero the | |
result is set to zero. If $b$ is greater than the number of bits in $a$ then it simply copies $a$ to $c$ and returns. Otherwise, $a$
is copied to $c$, the digits above the $2^b$ boundary are zeroed and the remaining leading digit is trimmed to the exact bit count.
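
The digit zeroing of step 5 and the final mask of steps 6 and 7 can be sketched as below, assuming DIGIT\_BIT is the number of bits per digit and that $b$ is positive and less than the bit count of $c$; the library routine follows.

\begin{verbatim}
#include <tommath.h>

/* Steps 5-7 of mp_mod_2d (sketch): reduce c modulo 2^b in place.
 * Assumes 0 < b < c->used * DIGIT_BIT.                            */
static void mask_mod_2d(mp_int *c, int b)
{
   int n;

   /* step 5: zero every digit entirely above the 2^b boundary */
   for (n = (b + DIGIT_BIT - 1) / DIGIT_BIT; n < c->used; n++) {
      c->dp[n] = 0;
   }
   /* steps 6-7: mask the digit that straddles the boundary */
   c->dp[b / DIGIT_BIT] &= ((mp_digit)1 << (b % DIGIT_BIT)) - 1;
   mp_clamp(c);                        /* step 8 */
}
\end{verbatim}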
2299 | |
2300 EXAM,bn_mp_mod_2d.c | |
2301 | |
We first avoid cases of $b \le 0$ by simply mp\_zero()'ing the destination in such cases. Next, if $2^b$ is larger
than the input we just mp\_copy() the input and return right away. After this point we know we must actually
perform some work to produce the remainder.

Recalling that reducing modulo $2^k$ and a binary ``and'' with $2^k - 1$ are numerically equivalent, we can quickly reduce
the number. First we zero any digits above the last digit in $2^b$ (line @41,for@). Next we reduce the
remaining leading digit (line @45,&=@) and then call mp\_clamp().
19 | 2309 |
2310 \section*{Exercises} | |
2311 \begin{tabular}{cl} | |
2312 $\left [ 3 \right ] $ & Devise an algorithm that performs $a \cdot 2^b$ for generic values of $b$ \\ | |
2313 & in $O(n)$ time. \\ | |
2314 &\\ | |
$\left [ 3 \right ] $ & Devise an efficient algorithm to multiply by small low Hamming \\
& weight values such as $3$, $5$ and $9$. Extend it to handle all values \\
& up to $64$ with a Hamming weight less than three. \\
2318 &\\ | |
2319 $\left [ 2 \right ] $ & Modify the preceding algorithm to handle values of the form \\ | |
2320 & $2^k - 1$ as well. \\ | |
2321 &\\ | |
2322 $\left [ 3 \right ] $ & Using only algorithms mp\_mul\_2, mp\_div\_2 and mp\_add create an \\ | |
2323 & algorithm to multiply two integers in roughly $O(2n^2)$ time for \\ | |
2324 & any $n$-bit input. Note that the time of addition is ignored in the \\ | |
2325 & calculation. \\ | |
2326 & \\ | |
2327 $\left [ 5 \right ] $ & Improve the previous algorithm to have a working time of at most \\ | |
2328 & $O \left (2^{(k-1)}n + \left ({2n^2 \over k} \right ) \right )$ for an appropriate choice of $k$. Again ignore \\ | |
2329 & the cost of addition. \\ | |
2330 & \\ | |
2331 $\left [ 2 \right ] $ & Devise a chart to find optimal values of $k$ for the previous problem \\ | |
2332 & for $n = 64 \ldots 1024$ in steps of $64$. \\ | |
2333 & \\ | |
2334 $\left [ 2 \right ] $ & Using only algorithms mp\_abs and mp\_sub devise another method for \\ | |
2335 & calculating the result of a signed comparison. \\ | |
2336 & | |
2337 \end{tabular} | |
2338 | |
2339 \chapter{Multiplication and Squaring} | |
2340 \section{The Multipliers} | |
2341 For most number theoretic problems including certain public key cryptographic algorithms, the ``multipliers'' form the most important subset of | |
2342 algorithms of any multiple precision integer package. The set of multiplier algorithms include integer multiplication, squaring and modular reduction | |
2343 where in each of the algorithms single precision multiplication is the dominant operation performed. This chapter will discuss integer multiplication | |
2344 and squaring, leaving modular reductions for the subsequent chapter. | |
2345 | |
2346 The importance of the multiplier algorithms is for the most part driven by the fact that certain popular public key algorithms are based on modular | |
2347 exponentiation, that is computing $d \equiv a^b \mbox{ (mod }c\mbox{)}$ for some arbitrary choice of $a$, $b$, $c$ and $d$. During a modular | |
2348 exponentiation the majority\footnote{Roughly speaking a modular exponentiation will spend about 40\% of the time performing modular reductions, | |
2349 35\% of the time performing squaring and 25\% of the time performing multiplications.} of the processor time is spent performing single precision | |
2350 multiplications. | |
2351 | |
For centuries general purpose multiplication has required a lengthy $O(n^2)$ process, whereby each digit of one multiplicand has to be multiplied
2353 against every digit of the other multiplicand. Traditional long-hand multiplication is based on this process; while the techniques can differ the | |
2354 overall algorithm used is essentially the same. Only ``recently'' have faster algorithms been studied. First Karatsuba multiplication was discovered in | |
2355 1962. This algorithm can multiply two numbers with considerably fewer single precision multiplications when compared to the long-hand approach. | |
This technique led to the discovery of polynomial basis algorithms (\textit{good reference?}) and subsequently Fourier Transform based solutions.
2357 | |
2358 \section{Multiplication} | |
2359 \subsection{The Baseline Multiplication} | |
2360 \label{sec:basemult} | |
2361 \index{baseline multiplication} | |
2362 Computing the product of two integers in software can be achieved using a trivial adaptation of the standard $O(n^2)$ long-hand multiplication | |
2363 algorithm that school children are taught. The algorithm is considered an $O(n^2)$ algorithm since for two $n$-digit inputs $n^2$ single precision | |
multiplications are required. More specifically, for an $m$ and $n$ digit input $m \cdot n$ single precision multiplications are required. To
simplify most discussions, it will be assumed that the inputs have a comparable number of digits.
2366 | |
2367 The ``baseline multiplication'' algorithm is designed to act as the ``catch-all'' algorithm, only to be used when the faster algorithms cannot be | |
2368 used. This algorithm does not use any particularly interesting optimizations and should ideally be avoided if possible. One important | |
2369 facet of this algorithm, is that it has been modified to only produce a certain amount of output digits as resolution. The importance of this | |
modification will become evident during the discussion of Barrett modular reduction. Recall that for an $n$ and $m$ digit input the product
2371 will be at most $n + m$ digits. Therefore, this algorithm can be reduced to a full multiplier by having it produce $n + m$ digits of the product. | |
2372 | |
2373 Recall from ~GAMMA~ the definition of $\gamma$ as the number of bits in the type \textbf{mp\_digit}. We shall now extend the variable set to | |
2374 include $\alpha$ which shall represent the number of bits in the type \textbf{mp\_word}. This implies that $2^{\alpha} > 2 \cdot \beta^2$. The | |
2375 constant $\delta = 2^{\alpha - 2lg(\beta)}$ will represent the maximal weight of any column in a product (\textit{see ~COMBA~ for more information}). | |
2376 | |
2377 \newpage\begin{figure}[!here] | |
2378 \begin{small} | |
2379 \begin{center} | |
2380 \begin{tabular}{l} | |
2381 \hline Algorithm \textbf{s\_mp\_mul\_digs}. \\ | |
2382 \textbf{Input}. mp\_int $a$, mp\_int $b$ and an integer $digs$ \\ | |
2383 \textbf{Output}. $c \leftarrow \vert a \vert \cdot \vert b \vert \mbox{ (mod }\beta^{digs}\mbox{)}$. \\ | |
2384 \hline \\ | |
2385 1. If min$(a.used, b.used) < \delta$ then do \\ | |
2386 \hspace{3mm}1.1 Calculate $c = \vert a \vert \cdot \vert b \vert$ by the Comba method (\textit{see algorithm~\ref{fig:COMBAMULT}}). \\ | |
2387 \hspace{3mm}1.2 Return the result of step 1.1 \\ | |
2388 \\ | |
2389 Allocate and initialize a temporary mp\_int. \\ | |
2390 2. Init $t$ to be of size $digs$ \\ | |
2391 3. If step 2 failed return(\textit{MP\_MEM}). \\ | |
2392 4. $t.used \leftarrow digs$ \\ | |
2393 \\ | |
2394 Compute the product. \\ | |
2395 5. for $ix$ from $0$ to $a.used - 1$ do \\ | |
2396 \hspace{3mm}5.1 $u \leftarrow 0$ \\ | |
2397 \hspace{3mm}5.2 $pb \leftarrow \mbox{min}(b.used, digs - ix)$ \\ | |
2398 \hspace{3mm}5.3 If $pb < 1$ then goto step 6. \\ | |
2399 \hspace{3mm}5.4 for $iy$ from $0$ to $pb - 1$ do \\ | |
2400 \hspace{6mm}5.4.1 $\hat r \leftarrow t_{iy + ix} + a_{ix} \cdot b_{iy} + u$ \\ | |
2401 \hspace{6mm}5.4.2 $t_{iy + ix} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
2402 \hspace{6mm}5.4.3 $u \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
2403 \hspace{3mm}5.5 if $ix + pb < digs$ then do \\ | |
2404 \hspace{6mm}5.5.1 $t_{ix + pb} \leftarrow u$ \\ | |
2405 6. Clamp excess digits of $t$. \\ | |
2406 7. Swap $c$ with $t$ \\ | |
2407 8. Clear $t$ \\ | |
2408 9. Return(\textit{MP\_OKAY}). \\ | |
2409 \hline | |
2410 \end{tabular} | |
2411 \end{center} | |
2412 \end{small} | |
2413 \caption{Algorithm s\_mp\_mul\_digs} | |
2414 \end{figure} | |
2415 | |
2416 \textbf{Algorithm s\_mp\_mul\_digs.} | |
2417 This algorithm computes the unsigned product of two inputs $a$ and $b$, limited to an output precision of $digs$ digits. While it may seem | |
2418 a bit awkward to modify the function from its simple $O(n^2)$ description, the usefulness of partial multipliers will arise in a subsequent | |
2419 algorithm. The algorithm is loosely based on algorithm 14.12 from \cite[pp. 595]{HAC} and is similar to Algorithm M of Knuth \cite[pp. 268]{TAOCPV2}. | |
2420 Algorithm s\_mp\_mul\_digs differs from these cited references since it can produce a variable output precision regardless of the precision of the | |
2421 inputs. | |
2422 | |
The first thing this algorithm checks is whether the faster Comba multiplier can be used instead. If the minimum digit count of the two
inputs is less than $\delta$, then the Comba method is used. After the Comba method is ruled out, the baseline algorithm begins. A
temporary mp\_int variable $t$ is used to hold the intermediate result of the product. This allows the algorithm to be used to
compute products when either $a = c$ or $b = c$ without overwriting the inputs.
2427 | |
All of step 5 is the infamous $O(n^2)$ multiplication loop slightly modified to only produce up to $digs$ digits of output. The $pb$ variable
is given the count of digits to read from $b$ inside the nested loop. If $pb < 1$ then no more output digits can be produced and the algorithm
will exit the loop. The best way to think of the loops is as a series of $pb \times 1$ multiplications. That is, in each pass of the
outer loop $a_{ix}$ is multiplied against $b$ and the result is added (\textit{with an appropriate shift}) to $t$.
2432 | |
2433 For example, consider multiplying $576$ by $241$. That is equivalent to computing $10^0(1)(576) + 10^1(4)(576) + 10^2(2)(576)$ which is best | |
2434 visualized in the following table. | |
2435 | |
2436 \begin{figure}[here] | |
2437 \begin{center} | |
2438 \begin{tabular}{|c|c|c|c|c|c|l|} | |
2439 \hline && & 5 & 7 & 6 & \\ | |
2440 \hline $\times$&& & 2 & 4 & 1 & \\ | |
2441 \hline &&&&&&\\ | |
2442 && & 5 & 7 & 6 & $10^0(1)(576)$ \\ | |
2443 &2 & 3 & 6 & 1 & 6 & $10^1(4)(576) + 10^0(1)(576)$ \\ | |
2444 1 & 3 & 8 & 8 & 1 & 6 & $10^2(2)(576) + 10^1(4)(576) + 10^0(1)(576)$ \\ | |
2445 \hline | |
2446 \end{tabular} | |
2447 \end{center} | |
2448 \caption{Long-Hand Multiplication Diagram} | |
2449 \end{figure} | |
2450 | |
Each row of the product is added to the result after being shifted to the left (\textit{multiplied by a power of the radix}) by the appropriate
count. That is, in pass $ix$ of the outer loop the row product is added starting at the $ix$'th digit of the result.
2453 | |
2454 Step 5.4.1 introduces the hat symbol (\textit{e.g. $\hat r$}) which represents a double precision variable. The multiplication on that step | |
2455 is assumed to be a double wide output single precision multiplication. That is, two single precision variables are multiplied to produce a | |
2456 double precision result. The step is somewhat optimized from a long-hand multiplication algorithm because the carry from the addition in step | |
2457 5.4.1 is propagated through the nested loop. If the carry was not propagated immediately it would overflow the single precision digit | |
2458 $t_{ix+iy}$ and the result would be lost. | |
2459 | |
2460 At step 5.5 the nested loop is finished and any carry that was left over should be forwarded. The carry does not have to be added to the $ix+pb$'th | |
2461 digit since that digit is assumed to be zero at this point. However, if $ix + pb \ge digs$ the carry is not set as it would make the result | |
2462 exceed the precision requested. | |
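
To see these steps in action, the following toy program (an illustration only: it works in base $10$ with digits stored in
plain ints rather than mp\_digits, and it is not the LibTomMath routine) mirrors steps 5.1 through 5.5.1 and reproduces the
$576 \cdot 241 = 138816$ example shown earlier.

\begin{verbatim}
#include <stdio.h>

/* Baseline O(n^2) multiplier in base 10, truncated to "digs" output digits,
   following steps 5.1 through 5.5.1 of the pseudo-code. */
static void partial_mul(const int *a, int na, const int *b, int nb,
                        int *t, int digs)
{
   int ix, iy, u, pb, r;

   for (ix = 0; ix < digs; ix++) t[ix] = 0;

   for (ix = 0; ix < na; ix++) {
      u  = 0;                               /* carry for this row */
      pb = (nb < digs - ix) ? nb : digs - ix;
      if (pb < 1) break;                    /* no more output digits fit */
      for (iy = 0; iy < pb; iy++) {
         r = t[iy + ix] + a[ix] * b[iy] + u;
         t[iy + ix] = r % 10;               /* store the digit */
         u = r / 10;                        /* propagate the carry */
      }
      if (ix + pb < digs) t[ix + pb] = u;   /* forward the final carry */
   }
}

int main(void)
{
   int a[3] = { 6, 7, 5 };   /* 576, least significant digit first */
   int b[3] = { 1, 4, 2 };   /* 241 */
   int t[6], ix;

   partial_mul(a, 3, b, 3, t, 6);
   for (ix = 5; ix >= 0; ix--) printf("%d", t[ix]);   /* prints 138816 */
   printf("\n");
   return 0;
}
\end{verbatim}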
2463 | |
2464 EXAM,bn_s_mp_mul_digs.c | |
2465 | |
First we determine (line @30,if@) if the Comba method can be used since it is faster. The conditions for
using the Comba routine are that min$(a.used, b.used) < \delta$ and the number of digits of output is less than
\textbf{MP\_WARRAY}. This new constant is used to control the stack usage in the Comba routines. By default it is
set to $\delta$ but can be reduced when memory is at a premium.

If we cannot use the Comba method we proceed to setup the baseline routine. We allocate the destination mp\_int
$t$ (line @36,init@) to the exact size of the output to avoid further re--allocations. At this point we
begin the $O(n^2)$ loop.

This implementation of multiplication has the caveat that it can be trimmed to only produce a variable number of
digits as output. In each iteration of the outer loop the $pb$ variable is set (line @48,MIN@) to the maximum
number of inner loop iterations.

Inside the inner loop we calculate $\hat r$ as the mp\_word product of the two mp\_digits and the addition of the
carry from the previous iteration. A particularly important observation is that most modern optimizing
C compilers (GCC for instance) can recognize that an $N \times N \rightarrow 2N$ multiplication is all that
is required for the product. In x86 terms for example, this means using the MUL instruction.

Each digit of the product is stored in turn (line @68,tmpt@) and the carry propagated (line @71,>>@) to the
next iteration.
19 | 2486 |
2487 \subsection{Faster Multiplication by the ``Comba'' Method} | |
2488 MARK,COMBA | |
2489 | |
One of the huge drawbacks of the ``baseline'' algorithms is that at the $O(n^2)$ level the carry must be
computed and propagated upwards. This makes the nested loop very sequential and hard to unroll and implement
in parallel. The ``Comba'' \cite{COMBA} method is named after the little-known (\textit{in cryptographic venues}) Paul G.
Comba who described a method of implementing fast multipliers that do not require nested carry fixup operations. As an
interesting aside it seems that Paul Barrett describes a similar technique in his 1986 paper \cite{BARRETT}, written
five years before.

At the heart of the Comba technique is once again the long-hand algorithm. Except in this case a slight
twist is placed on how the columns of the result are produced. In the standard long-hand algorithm rows of products
are produced then added together to form the final result. In the baseline algorithm the columns are added together
after each iteration to get the result instantaneously.

In the Comba algorithm the columns of the result are produced entirely independently of each other. That is at
the $O(n^2)$ level a simple multiplication and addition step is performed. The carries of the columns are propagated
after the nested loop to reduce the amount of work required. Succinctly the first step of the algorithm is to compute
the product vector $\vec x$ as follows.
19 | 2506 |
2507 \begin{equation} | |
2508 \vec x_n = \sum_{i+j = n} a_ib_j, \forall n \in \lbrace 0, 1, 2, \ldots, i + j \rbrace | |
2509 \end{equation} | |
2510 | |
2511 Where $\vec x_n$ is the $n'th$ column of the output vector. Consider the following example which computes the vector $\vec x$ for the multiplication | |
2512 of $576$ and $241$. | |
2513 | |
2514 \newpage\begin{figure}[here] | |
2515 \begin{small} | |
2516 \begin{center} | |
2517 \begin{tabular}{|c|c|c|c|c|c|} | |
2518 \hline & & 5 & 7 & 6 & First Input\\ | |
2519 \hline $\times$ & & 2 & 4 & 1 & Second Input\\ | |
2520 \hline & & $1 \cdot 5 = 5$ & $1 \cdot 7 = 7$ & $1 \cdot 6 = 6$ & First pass \\ | |
2521 & $4 \cdot 5 = 20$ & $4 \cdot 7+5=33$ & $4 \cdot 6+7=31$ & 6 & Second pass \\ | |
2522 $2 \cdot 5 = 10$ & $2 \cdot 7 + 20 = 34$ & $2 \cdot 6+33=45$ & 31 & 6 & Third pass \\ | |
2523 \hline 10 & 34 & 45 & 31 & 6 & Final Result \\ | |
2524 \hline | |
2525 \end{tabular} | |
2526 \end{center} | |
2527 \end{small} | |
2528 \caption{Comba Multiplication Diagram} | |
2529 \end{figure} | |
2530 | |
At this point the vector $x = \left < 10, 34, 45, 31, 6 \right >$ is the result of the first step of the Comba multiplier.
Now the columns must be fixed by propagating the carry upwards. The resultant vector will have one extra dimension over the input vector which is
equivalent to adding a leading zero digit.
2534 | |
2535 \begin{figure}[!here] | |
2536 \begin{small} | |
2537 \begin{center} | |
2538 \begin{tabular}{l} | |
2539 \hline Algorithm \textbf{Comba Fixup}. \\ | |
2540 \textbf{Input}. Vector $\vec x$ of dimension $k$ \\ | |
2541 \textbf{Output}. Vector $\vec x$ such that the carries have been propagated. \\ | |
2542 \hline \\ | |
2543 1. for $n$ from $0$ to $k - 1$ do \\ | |
2544 \hspace{3mm}1.1 $\vec x_{n+1} \leftarrow \vec x_{n+1} + \lfloor \vec x_{n}/\beta \rfloor$ \\ | |
2545 \hspace{3mm}1.2 $\vec x_{n} \leftarrow \vec x_{n} \mbox{ (mod }\beta\mbox{)}$ \\ | |
2546 2. Return($\vec x$). \\ | |
2547 \hline | |
2548 \end{tabular} | |
2549 \end{center} | |
2550 \end{small} | |
2551 \caption{Algorithm Comba Fixup} | |
2552 \end{figure} | |
2553 | |
With that algorithm, $k = 5$ and $\beta = 10$, the following vector is produced: $\vec x= \left < 1, 3, 8, 8, 1, 6 \right >$. In this case
$241 \cdot 576$ is in fact $138816$ and the procedure succeeded. If the algorithm is correct and, as will be demonstrated shortly, more
efficient than the baseline algorithm, why not simply always use this algorithm?
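
The two-step procedure can be reproduced with a short program (again a base $10$ illustration in plain C, not the library
code) that first accumulates the raw columns for $576 \cdot 241$ and then applies the fixup, printing $138816$.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
   int a[3] = { 6, 7, 5 };   /* 576, least significant digit first */
   int b[3] = { 1, 4, 2 };   /* 241 */
   int x[7] = { 0 };         /* columns 0..4 plus room for the carry */
   int i, j;

   /* first step: the raw column products, x_n = sum of a_i * b_j with i+j = n */
   for (i = 0; i < 3; i++)
      for (j = 0; j < 3; j++)
         x[i + j] += a[i] * b[j];

   /* x is now { 6, 31, 45, 34, 10, 0, 0 }, least significant column first */

   /* second step: the Comba fixup, propagate the carries upwards */
   for (i = 0; i < 6; i++) {
      x[i + 1] += x[i] / 10;
      x[i] %= 10;
   }

   for (i = 5; i >= 0; i--)   /* print most significant digit first: 138816 */
      printf("%d", x[i]);
   printf("\n");
   return 0;
}
\end{verbatim}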
2557 | |
2558 \subsubsection{Column Weight.} | |
At the nested $O(n^2)$ level the Comba method adds the product of two single precision variables to each column of the output
independently. A serious obstacle arises if the carry is lost due to lack of precision before the algorithm has a chance to fix
the carries. For example, in the multiplication of two three-digit numbers the third column of output will be the sum of
three single precision multiplications. If the precision of the accumulator for the output digits is less than $3 \cdot (\beta - 1)^2$ then
an overflow can occur and the carry information will be lost. For any $m$ and $n$ digit inputs the maximum weight of any column is
min$(m, n)$ which is fairly obvious.
2565 | |
2566 The maximum number of terms in any column of a product is known as the ``column weight'' and strictly governs when the algorithm can be used. Recall | |
2567 from earlier that a double precision type has $\alpha$ bits of resolution and a single precision digit has $lg(\beta)$ bits of precision. Given these | |
2568 two quantities we must not violate the following | |
2569 | |
2570 \begin{equation} | |
2571 k \cdot \left (\beta - 1 \right )^2 < 2^{\alpha} | |
2572 \end{equation} | |
2573 | |
2574 Which reduces to | |
2575 | |
2576 \begin{equation} | |
2577 k \cdot \left ( \beta^2 - 2\beta + 1 \right ) < 2^{\alpha} | |
2578 \end{equation} | |
2579 | |
2580 Let $\rho = lg(\beta)$ represent the number of bits in a single precision digit. By further re-arrangement of the equation the final solution is | |
2581 found. | |
2582 | |
2583 \begin{equation} | |
2584 k < {{2^{\alpha}} \over {\left (2^{2\rho} - 2^{\rho + 1} + 1 \right )}} | |
2585 \end{equation} | |
2586 | |
The defaults for LibTomMath are $\beta = 2^{28}$ and $\alpha = 64$ which means that $k$ is bounded by $k < 257$. In this configuration
the smaller input may not have more than $256$ digits if the Comba method is to be used. This is quite satisfactory for most applications since
$256$ digits would allow for numbers in the range of $0 \le x < 2^{7168}$ which is much larger than most public key cryptographic algorithms require.
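
As a quick sanity check of this bound with the stated defaults ($lg(\beta) = 28$ and $\alpha = 64$),

\begin{equation}
{{2^{64}} \over {2^{56} - 2^{29} + 1}} \approx 256.000002
\end{equation}

which is consistent with the limit of $256$ digits quoted above.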
2590 | |
2591 \newpage\begin{figure}[!here] | |
2592 \begin{small} | |
2593 \begin{center} | |
2594 \begin{tabular}{l} | |
2595 \hline Algorithm \textbf{fast\_s\_mp\_mul\_digs}. \\ | |
2596 \textbf{Input}. mp\_int $a$, mp\_int $b$ and an integer $digs$ \\ | |
2597 \textbf{Output}. $c \leftarrow \vert a \vert \cdot \vert b \vert \mbox{ (mod }\beta^{digs}\mbox{)}$. \\ | |
2598 \hline \\ | |
Place an array of \textbf{MP\_WARRAY} single precision digits named $W$ on the stack. \\
1. If $c.alloc < digs$ then grow $c$ to $digs$ digits. (\textit{mp\_grow}) \\
2. If step 1 failed return(\textit{MP\_MEM}).\\
\\
3. $pa \leftarrow \mbox{MIN}(digs, a.used + b.used)$ \\
\\
4. $\_ \hat W \leftarrow 0$ \\
5. for $ix$ from 0 to $pa - 1$ do \\
\hspace{3mm}5.1 $ty \leftarrow \mbox{MIN}(b.used - 1, ix)$ \\
\hspace{3mm}5.2 $tx \leftarrow ix - ty$ \\
\hspace{3mm}5.3 $iy \leftarrow \mbox{MIN}(a.used - tx, ty + 1)$ \\
\hspace{3mm}5.4 for $iz$ from 0 to $iy - 1$ do \\
\hspace{6mm}5.4.1 $\_ \hat W \leftarrow \_ \hat W + a_{tx+iz}b_{ty-iz}$ \\
\hspace{3mm}5.5 $W_{ix} \leftarrow \_ \hat W (\mbox{mod }\beta)$\\
\hspace{3mm}5.6 $\_ \hat W \leftarrow \lfloor \_ \hat W / \beta \rfloor$ \\
6. $W_{pa} \leftarrow \_ \hat W (\mbox{mod }\beta)$ \\
\\
7. $oldused \leftarrow c.used$ \\
8. $c.used \leftarrow digs$ \\
9. for $ix$ from $0$ to $pa$ do \\
\hspace{3mm}9.1 $c_{ix} \leftarrow W_{ix}$ \\
10. for $ix$ from $pa + 1$ to $oldused - 1$ do \\
\hspace{3mm}10.1 $c_{ix} \leftarrow 0$ \\
\\
11. Clamp $c$. \\
12. Return MP\_OKAY. \\
19 | 2625 \hline |
2626 \end{tabular} | |
2627 \end{center} | |
2628 \end{small} | |
2629 \caption{Algorithm fast\_s\_mp\_mul\_digs} | |
2630 \label{fig:COMBAMULT} | |
2631 \end{figure} | |
2632 | |
2633 \textbf{Algorithm fast\_s\_mp\_mul\_digs.} | |
This algorithm performs the unsigned multiplication of $a$ and $b$ using the Comba method limited to $digs$ digits of precision.

The outer loop of this algorithm is more complicated than that of the baseline multiplier. This is because on the inside of the
loop we want to produce one column per pass. This allows the accumulator $\_ \hat W$ to be placed in CPU registers and
reduces the memory bandwidth to two \textbf{mp\_digit} reads per iteration.

The $ty$ variable is set to the minimum of $ix$ and the number of digits in $b$ less one. That way if $a$ has more digits than
$b$ this will be limited to $b.used - 1$. The $tx$ variable is set to the distance the variable $ix$ is past $b.used - 1$.
This is used for the immediately subsequent statement where we find $iy$.

The variable $iy$ is the minimum number of digits we can read from either $a$ or $b$ before running out. Computing one column at a time
means we have to scan one integer upwards and the other downwards. $a$ starts at $tx$ and $b$ starts at $ty$. In each
pass we are producing the $ix$'th output column and we note that $tx + ty = ix$. As we move $tx$ upwards we have to
move $ty$ downwards so the equality remains valid. The $iy$ variable is the number of iterations until
$tx \ge a.used$ or $ty < 0$ occurs.

After every inner pass we store the lower half of the accumulator into $W_{ix}$ and then propagate the carry of the accumulator
into the next round by dividing $\_ \hat W$ by $\beta$.
19 | 2652 |
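
The way $tx$, $ty$ and $iz$ scan the two inputs can be seen in a small model of the loop structure (plain C over base $10$
digits; a sketch only, not the library routine).

\begin{verbatim}
#include <stdio.h>

/* Column-major (Comba) multiplication in base 10: every pass of the outer
   loop produces exactly one output column.  tx and ty locate the first
   product of the column and iz walks a upwards while b is walked downwards. */
static void comba_mul(const int *a, int na, const int *b, int nb, int *w)
{
   long acc = 0;                                       /* the accumulator _W       */
   int ix, iz, pa = na + nb;

   for (ix = 0; ix < pa; ix++) {
      int ty = (nb - 1 < ix) ? nb - 1 : ix;            /* MIN(b.used - 1, ix)      */
      int tx = ix - ty;
      int iy = (na - tx < ty + 1) ? na - tx : ty + 1;  /* MIN(a.used - tx, ty + 1) */

      for (iz = 0; iz < iy; iz++)                      /* one whole column at once */
         acc += (long)a[tx + iz] * b[ty - iz];

      w[ix] = (int)(acc % 10);                         /* store the column digit   */
      acc /= 10;                                       /* forward the carry        */
   }
}

int main(void)
{
   int a[3] = { 6, 7, 5 }, b[3] = { 1, 4, 2 }, w[6], ix;   /* 576 * 241 */

   comba_mul(a, 3, b, 3, w);
   for (ix = 5; ix >= 0; ix--) printf("%d", w[ix]);        /* prints 138816 */
   printf("\n");
   return 0;
}
\end{verbatim}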
To measure the benefits of the Comba method over the baseline method consider the number of operations that are required. If the
cost in terms of time of a multiply and addition is $p$ and the cost of a carry propagation is $q$ then a baseline multiplication would require
$O \left ((p + q)n^2 \right )$ time to multiply two $n$-digit numbers. The Comba method requires only $O(pn^2 + qn)$ time; however, in practice
the speed increase is actually much greater. With $O(n)$ space the algorithm can be reduced to $O(pn + qn)$ time by implementing the $n$ multiply
and addition operations in the nested loop in parallel.
2658 | |
2659 EXAM,bn_fast_s_mp_mul_digs.c | |
2660 | |
As per the pseudo--code we first calculate $pa$ (line @47,MIN@) as the number of digits to output. Next we begin the outer loop
to produce the individual columns of the product. We use the two aliases $tmpx$ and $tmpy$ (lines @61,tmpx@, @62,tmpy@) to point
inside the two multiplicands quickly.

The inner loop (lines @70,for@ to @72,}@) of this implementation is where the tradeoff comes into play. Originally this Comba
implementation was ``row--major'' which means it added to each of the columns in each pass. After the outer loop it would then fix
the carries. This was very fast except it had an annoying drawback. You had to read an mp\_word and two mp\_digits and write
one mp\_word per iteration. On processors such as the Athlon XP and P4 this did not matter much since the cache bandwidth
is very high and it can keep the ALU fed with data. It did, however, matter on older and embedded CPUs where the cache is often
slower and also often doesn't exist. This new algorithm only performs two reads per iteration under the assumption that the
compiler has aliased $\_ \hat W$ to a CPU register.

After the inner loop we store the current accumulator in $W$ and shift $\_ \hat W$ (lines @75,W[ix]@, @78,>>@) to forward it as
a carry for the next pass. After the outer loop we use the final carry (line @82,W[ix]@) as the last digit of the product.
19 | 2675 |
2676 \subsection{Polynomial Basis Multiplication} | |
2677 To break the $O(n^2)$ barrier in multiplication requires a completely different look at integer multiplication. In the following algorithms | |
2678 the use of polynomial basis representation for two integers $a$ and $b$ as $f(x) = \sum_{i=0}^{n} a_i x^i$ and | |
2679 $g(x) = \sum_{i=0}^{n} b_i x^i$ respectively, is required. In this system both $f(x)$ and $g(x)$ have $n + 1$ terms and are of the $n$'th degree. | |
2680 | |
2681 The product $a \cdot b \equiv f(x)g(x)$ is the polynomial $W(x) = \sum_{i=0}^{2n} w_i x^i$. The coefficients $w_i$ will | |
2682 directly yield the desired product when $\beta$ is substituted for $x$. The direct solution to solve for the $2n + 1$ coefficients | |
2683 requires $O(n^2)$ time and would in practice be slower than the Comba technique. | |
2684 | |
2685 However, numerical analysis theory indicates that only $2n + 1$ distinct points in $W(x)$ are required to determine the values of the $2n + 1$ unknown | |
2686 coefficients. This means by finding $\zeta_y = W(y)$ for $2n + 1$ small values of $y$ the coefficients of $W(x)$ can be found with | |
Gaussian elimination. This technique is also occasionally referred to as the \textit{interpolation technique} (\textit{references please...}) since in
2688 effect an interpolation based on $2n + 1$ points will yield a polynomial equivalent to $W(x)$. | |
2689 | |
2690 The coefficients of the polynomial $W(x)$ are unknown which makes finding $W(y)$ for any value of $y$ impossible. However, since | |
2691 $W(x) = f(x)g(x)$ the equivalent $\zeta_y = f(y) g(y)$ can be used in its place. The benefit of this technique stems from the | |
2692 fact that $f(y)$ and $g(y)$ are much smaller than either $a$ or $b$ respectively. As a result finding the $2n + 1$ relations required | |
2693 by multiplying $f(y)g(y)$ involves multiplying integers that are much smaller than either of the inputs. | |
2694 | |
2695 When picking points to gather relations there are always three obvious points to choose, $y = 0, 1$ and $ \infty$. The $\zeta_0$ term | |
2696 is simply the product $W(0) = w_0 = a_0 \cdot b_0$. The $\zeta_1$ term is the product | |
2697 $W(1) = \left (\sum_{i = 0}^{n} a_i \right ) \left (\sum_{i = 0}^{n} b_i \right )$. The third point $\zeta_{\infty}$ is less obvious but rather | |
2698 simple to explain. The $2n + 1$'th coefficient of $W(x)$ is numerically equivalent to the most significant column in an integer multiplication. | |
2699 The point at $\infty$ is used symbolically to represent the most significant column, that is $W(\infty) = w_{2n} = a_nb_n$. Note that the | |
2700 points at $y = 0$ and $\infty$ yield the coefficients $w_0$ and $w_{2n}$ directly. | |
2701 | |
2702 If more points are required they should be of small values and powers of two such as $2^q$ and the related \textit{mirror points} | |
2703 $\left (2^q \right )^{2n} \cdot \zeta_{2^{-q}}$ for small values of $q$. The term ``mirror point'' stems from the fact that | |
2704 $\left (2^q \right )^{2n} \cdot \zeta_{2^{-q}}$ can be calculated in the exact opposite fashion as $\zeta_{2^q}$. For | |
example, when $n = 2$ and $q = 1$ the following two equations are equivalent to the point $\zeta_{2}$ and its mirror.
2706 | |
2707 \begin{eqnarray} | |
2708 \zeta_{2} = f(2)g(2) = (4a_2 + 2a_1 + a_0)(4b_2 + 2b_1 + b_0) \nonumber \\ | |
2709 16 \cdot \zeta_{1 \over 2} = 4f({1\over 2}) \cdot 4g({1 \over 2}) = (a_2 + 2a_1 + 4a_0)(b_2 + 2b_1 + 4b_0) | |
2710 \end{eqnarray} | |
2711 | |
2712 Using such points will allow the values of $f(y)$ and $g(y)$ to be independently calculated using only left shifts. For example, when $n = 2$ the | |
2713 polynomial $f(2^q)$ is equal to $2^q((2^qa_2) + a_1) + a_0$. This technique of polynomial representation is known as Horner's method. | |
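
For instance, a helper along the following lines could evaluate $f(2^q)$ for a three coefficient split using only shifts and
additions. This is only a sketch: the function name is made up for illustration, and it assumes the usual LibTomMath
mp\_mul\_2d and mp\_add interfaces operating on already initialized mp\_ints.

\begin{verbatim}
#include <tommath.h>

/* Evaluate f(2^q) = 2^q*(2^q*a2 + a1) + a0 by Horner's rule using only
   shifts (mp_mul_2d) and additions (mp_add).  Illustrative helper only. */
static int eval_f_2q(mp_int *a0, mp_int *a1, mp_int *a2, int q, mp_int *r)
{
   int err;

   if ((err = mp_mul_2d(a2, q, r)) != MP_OKAY) return err;   /* r = 2^q * a2 */
   if ((err = mp_add(r, a1, r))    != MP_OKAY) return err;   /* r = r + a1   */
   if ((err = mp_mul_2d(r, q, r))  != MP_OKAY) return err;   /* r = 2^q * r  */
   return mp_add(r, a0, r);                                  /* r = r + a0   */
}
\end{verbatim}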
2714 | |
As a general rule of the algorithm, when the inputs are split into $n$ parts each, there are $2n - 1$ multiplications. Each multiplication is of
2716 multiplicands that have $n$ times fewer digits than the inputs. The asymptotic running time of this algorithm is | |
2717 $O \left ( k^{lg_n(2n - 1)} \right )$ for $k$ digit inputs (\textit{assuming they have the same number of digits}). Figure~\ref{fig:exponent} | |
2718 summarizes the exponents for various values of $n$. | |
2719 | |
2720 \begin{figure} | |
2721 \begin{center} | |
2722 \begin{tabular}{|c|c|c|} | |
2723 \hline \textbf{Split into $n$ Parts} & \textbf{Exponent} & \textbf{Notes}\\ | |
2724 \hline $2$ & $1.584962501$ & This is Karatsuba Multiplication. \\ | |
2725 \hline $3$ & $1.464973520$ & This is Toom-Cook Multiplication. \\ | |
2726 \hline $4$ & $1.403677461$ &\\ | |
2727 \hline $5$ & $1.365212389$ &\\ | |
2728 \hline $10$ & $1.278753601$ &\\ | |
2729 \hline $100$ & $1.149426538$ &\\ | |
2730 \hline $1000$ & $1.100270931$ &\\ | |
2731 \hline $10000$ & $1.075252070$ &\\ | |
2732 \hline | |
2733 \end{tabular} | |
2734 \end{center} | |
2735 \caption{Asymptotic Running Time of Polynomial Basis Multiplication} | |
2736 \label{fig:exponent} | |
2737 \end{figure} | |
2738 | |
2739 At first it may seem like a good idea to choose $n = 1000$ since the exponent is approximately $1.1$. However, the overhead | |
2740 of solving for the 2001 terms of $W(x)$ will certainly consume any savings the algorithm could offer for all but exceedingly large | |
2741 numbers. | |
2742 | |
2743 \subsubsection{Cutoff Point} | |
2744 The polynomial basis multiplication algorithms all require fewer single precision multiplications than a straight Comba approach. However, | |
2745 the algorithms incur an overhead (\textit{at the $O(n)$ work level}) since they require a system of equations to be solved. This makes the | |
2746 polynomial basis approach more costly to use with small inputs. | |
2747 | |
2748 Let $m$ represent the number of digits in the multiplicands (\textit{assume both multiplicands have the same number of digits}). There exists a | |
2749 point $y$ such that when $m < y$ the polynomial basis algorithms are more costly than Comba, when $m = y$ they are roughly the same cost and | |
2750 when $m > y$ the Comba methods are slower than the polynomial basis algorithms. | |
2751 | |
2752 The exact location of $y$ depends on several key architectural elements of the computer platform in question. | |
2753 | |
2754 \begin{enumerate} | |
2755 \item The ratio of clock cycles for single precision multiplication versus other simpler operations such as addition, shifting, etc. For example | |
2756 on the AMD Athlon the ratio is roughly $17 : 1$ while on the Intel P4 it is $29 : 1$. The higher the ratio in favour of multiplication the lower | |
2757 the cutoff point $y$ will be. | |
2758 | |
\item The complexity of the linear system of equations (\textit{for the coefficients of $W(x)$}). Generally speaking, as the number of splits
grows the complexity grows substantially. Ideally solving the system will only involve addition, subtraction and shifting of integers. This
directly reflects on the ratio previously mentioned.
2762 | |
2763 \item To a lesser extent memory bandwidth and function call overheads. Provided the values are in the processor cache this is less of an | |
2764 influence over the cutoff point. | |
2765 | |
2766 \end{enumerate} | |
2767 | |
2768 A clean cutoff point separation occurs when a point $y$ is found such that all of the cutoff point conditions are met. For example, if the point | |
2769 is too low then there will be values of $m$ such that $m > y$ and the Comba method is still faster. Finding the cutoff points is fairly simple when | |
2770 a high resolution timer is available. | |
2771 | |
2772 \subsection{Karatsuba Multiplication} | |
2773 Karatsuba \cite{KARA} multiplication when originally proposed in 1962 was among the first set of algorithms to break the $O(n^2)$ barrier for | |
2774 general purpose multiplication. Given two polynomial basis representations $f(x) = ax + b$ and $g(x) = cx + d$, Karatsuba proved with | |
2775 light algebra \cite{KARAP} that the following polynomial is equivalent to multiplication of the two integers the polynomials represent. | |
2776 | |
2777 \begin{equation} | |
f(x) \cdot g(x) = acx^2 + ((ac + bd) - (a - b)(c - d))x + bd
2779 \end{equation} | |
2780 | |
Using the observation that $ac$ and $bd$ could be re-used, only three half-sized multiplications would be required to produce the product. Applying
2782 this algorithm recursively, the work factor becomes $O(n^{lg(3)})$ which is substantially better than the work factor $O(n^2)$ of the Comba technique. It turns | |
2783 out what Karatsuba did not know or at least did not publish was that this is simply polynomial basis multiplication with the points | |
2784 $\zeta_0$, $\zeta_{\infty}$ and $-\zeta_{-1}$. Consider the resultant system of equations. | |
2785 | |
2786 \begin{center} | |
2787 \begin{tabular}{rcrcrcrc} | |
2788 $\zeta_{0}$ & $=$ & & & & & $w_0$ \\ | |
2789 $-\zeta_{-1}$ & $=$ & $-w_2$ & $+$ & $w_1$ & $-$ & $w_0$ \\ | |
2790 $\zeta_{\infty}$ & $=$ & $w_2$ & & & & \\ | |
2791 \end{tabular} | |
2792 \end{center} | |
2793 | |
2794 By adding the first and last equation to the equation in the middle the term $w_1$ can be isolated and all three coefficients solved for. The simplicity | |
2795 of this system of equations has made Karatsuba fairly popular. In fact the cutoff point is often fairly low\footnote{With LibTomMath 0.18 it is 70 and 109 digits for the Intel P4 and AMD Athlon respectively.} | |
2796 making it an ideal algorithm to speed up certain public key cryptosystems such as RSA and Diffie-Hellman. It is worth noting that the point | |
2797 $\zeta_1$ could be substituted for $-\zeta_{-1}$. In this case the first and third row are subtracted instead of added to the second row. | |
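
The identity is easy to check on machine words. The following stand-alone program (an illustration on 32-bit integers split
into 16-bit halves, not the multiple precision routine) forms the product from the three multiplications $ac$, $bd$ and
$(a - b)(c - d)$ and compares it with the direct product.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* One level of Karatsuba on 32-bit inputs split into 16-bit halves. */
static uint64_t kara32(uint32_t x, uint32_t y)
{
   uint32_t a = x >> 16, b = x & 0xFFFF;   /* x = a*2^16 + b */
   uint32_t c = y >> 16, d = y & 0xFFFF;   /* y = c*2^16 + d */

   uint64_t ac = (uint64_t)a * c;
   uint64_t bd = (uint64_t)b * d;
   /* (a - b)(c - d) may be negative, so use signed arithmetic for it */
   int64_t  m  = (int64_t)((int32_t)a - (int32_t)b) *
                 (int64_t)((int32_t)c - (int32_t)d);
   int64_t  mid = (int64_t)ac + (int64_t)bd - m;      /* = ad + bc */

   return (ac << 32) + ((uint64_t)mid << 16) + bd;
}

int main(void)
{
   uint32_t x = 123456789U, y = 987654321U;

   printf("%llu\n", (unsigned long long)kara32(x, y));       /* Karatsuba    */
   printf("%llu\n", (unsigned long long)((uint64_t)x * y));  /* direct check */
   return 0;
}
\end{verbatim}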
2798 | |
2799 \newpage\begin{figure}[!here] | |
2800 \begin{small} | |
2801 \begin{center} | |
2802 \begin{tabular}{l} | |
2803 \hline Algorithm \textbf{mp\_karatsuba\_mul}. \\ | |
2804 \textbf{Input}. mp\_int $a$ and mp\_int $b$ \\ | |
2805 \textbf{Output}. $c \leftarrow \vert a \vert \cdot \vert b \vert$ \\ | |
2806 \hline \\ | |
2807 1. Init the following mp\_int variables: $x0$, $x1$, $y0$, $y1$, $t1$, $x0y0$, $x1y1$.\\ | |
2. If step 1 failed then return(\textit{MP\_MEM}). \\
2809 \\ | |
2810 Split the input. e.g. $a = x1 \cdot \beta^B + x0$ \\ | |
2811 3. $B \leftarrow \mbox{min}(a.used, b.used)/2$ \\ | |
2812 4. $x0 \leftarrow a \mbox{ (mod }\beta^B\mbox{)}$ (\textit{mp\_mod\_2d}) \\ | |
2813 5. $y0 \leftarrow b \mbox{ (mod }\beta^B\mbox{)}$ \\ | |
2814 6. $x1 \leftarrow \lfloor a / \beta^B \rfloor$ (\textit{mp\_rshd}) \\ | |
2815 7. $y1 \leftarrow \lfloor b / \beta^B \rfloor$ \\ | |
2816 \\ | |
2817 Calculate the three products. \\ | |
2818 8. $x0y0 \leftarrow x0 \cdot y0$ (\textit{mp\_mul}) \\ | |
2819 9. $x1y1 \leftarrow x1 \cdot y1$ \\ | |
2820 10. $t1 \leftarrow x1 - x0$ (\textit{mp\_sub}) \\ | |
2821 11. $x0 \leftarrow y1 - y0$ \\ | |
2822 12. $t1 \leftarrow t1 \cdot x0$ \\ | |
2823 \\ | |
2824 Calculate the middle term. \\ | |
2825 13. $x0 \leftarrow x0y0 + x1y1$ \\ | |
2826 14. $t1 \leftarrow x0 - t1$ \\ | |
2827 \\ | |
2828 Calculate the final product. \\ | |
2829 15. $t1 \leftarrow t1 \cdot \beta^B$ (\textit{mp\_lshd}) \\ | |
2830 16. $x1y1 \leftarrow x1y1 \cdot \beta^{2B}$ \\ | |
2831 17. $t1 \leftarrow x0y0 + t1$ \\ | |
2832 18. $c \leftarrow t1 + x1y1$ \\ | |
2833 19. Clear all of the temporary variables. \\ | |
2834 20. Return(\textit{MP\_OKAY}).\\ | |
2835 \hline | |
2836 \end{tabular} | |
2837 \end{center} | |
2838 \end{small} | |
2839 \caption{Algorithm mp\_karatsuba\_mul} | |
2840 \end{figure} | |
2841 | |
2842 \textbf{Algorithm mp\_karatsuba\_mul.} | |
2843 This algorithm computes the unsigned product of two inputs using the Karatsuba multiplication algorithm. It is loosely based on the description | |
2844 from Knuth \cite[pp. 294-295]{TAOCPV2}. | |
2845 | |
2846 \index{radix point} | |
2847 In order to split the two inputs into their respective halves, a suitable \textit{radix point} must be chosen. The radix point chosen must | |
2848 be used for both of the inputs meaning that it must be smaller than the smallest input. Step 3 chooses the radix point $B$ as half of the | |
2849 smallest input \textbf{used} count. After the radix point is chosen the inputs are split into lower and upper halves. Step 4 and 5 | |
2850 compute the lower halves. Step 6 and 7 computer the upper halves. | |
2851 | |
2852 After the halves have been computed the three intermediate half-size products must be computed. Step 8 and 9 compute the trivial products | |
2853 $x0 \cdot y0$ and $x1 \cdot y1$. The mp\_int $x0$ is used as a temporary variable after $x1 - x0$ has been computed. By using $x0$ instead | |
of an additional temporary variable, the algorithm can avoid an additional memory allocation operation.
2855 | |
2856 The remaining steps 13 through 18 compute the Karatsuba polynomial through a variety of digit shifting and addition operations. | |
2857 | |
2858 EXAM,bn_mp_karatsuba_mul.c | |
2859 | |
2860 The new coding element in this routine, not seen in previous routines, is the usage of goto statements. The conventional | |
2861 wisdom is that goto statements should be avoided. This is generally true, however when every single function call can fail, it makes sense | |
2862 to handle error recovery with a single piece of code. Lines @61,if@ to @75,if@ handle initializing all of the temporary variables | |
2863 required. Note how each of the if statements goes to a different label in case of failure. This allows the routine to correctly free only | |
2864 the temporaries that have been successfully allocated so far. | |
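
The pattern looks roughly like the following hypothetical helper (the function, labels and operations are made up purely to
illustrate the error handling style; only the mp\_init, mp\_clear and arithmetic calls are the usual LibTomMath ones).

\begin{verbatim}
#include <tommath.h>

/* Each failed initialization jumps to a label that frees only what has
   already been set up successfully. */
static int example_op(mp_int *a, mp_int *b, mp_int *c)
{
   mp_int t1, t2;
   int err;

   if ((err = mp_init(&t1)) != MP_OKAY) goto DONE;
   if ((err = mp_init(&t2)) != MP_OKAY) goto CLEAN_T1;

   if ((err = mp_mul(a, b, &t1)) != MP_OKAY) goto CLEAN_T2;   /* t1 = a * b  */
   if ((err = mp_add(&t1, c, &t2)) != MP_OKAY) goto CLEAN_T2; /* t2 = t1 + c */
   err = mp_copy(&t2, c);                                     /* c  = t2     */

CLEAN_T2: mp_clear(&t2);
CLEAN_T1: mp_clear(&t1);
DONE:     return err;
}
\end{verbatim}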
2865 | |
2866 The temporary variables are all initialized using the mp\_init\_size routine since they are expected to be large. This saves the | |
2867 additional reallocation that would have been necessary. Also $x0$, $x1$, $y0$ and $y1$ have to be able to hold at least their respective | |
2868 number of digits for the next section of code. | |
2869 | |
2870 The first algebraic portion of the algorithm is to split the two inputs into their halves. However, instead of using mp\_mod\_2d and mp\_rshd | |
2871 to extract the halves, the respective code has been placed inline within the body of the function. To initialize the halves, the \textbf{used} and | |
2872 \textbf{sign} members are copied first. The first for loop on line @98,for@ copies the lower halves. Since they are both the same magnitude it | |
2873 is simpler to calculate both lower halves in a single loop. The for loop on lines @104,for@ and @109,for@ calculate the upper halves $x1$ and | |
2874 $y1$ respectively. | |
2875 | |
2876 By inlining the calculation of the halves, the Karatsuba multiplier has a slightly lower overhead and can be used for smaller magnitude inputs. | |
2877 | |
When line @152,err@ is reached, the algorithm has completed successfully. The ``error status'' variable $err$ is set to \textbf{MP\_OKAY} so that
2879 the same code that handles errors can be used to clear the temporary variables and return. | |
2880 | |
2881 \subsection{Toom-Cook $3$-Way Multiplication} | |
2882 Toom-Cook $3$-Way \cite{TOOM} multiplication is essentially the polynomial basis algorithm for $n = 2$ except that the points are | |
2883 chosen such that $\zeta$ is easy to compute and the resulting system of equations easy to reduce. Here, the points $\zeta_{0}$, | |
2884 $16 \cdot \zeta_{1 \over 2}$, $\zeta_1$, $\zeta_2$ and $\zeta_{\infty}$ make up the five required points to solve for the coefficients | |
2885 of the $W(x)$. | |
2886 | |
2887 With the five relations that Toom-Cook specifies, the following system of equations is formed. | |
2888 | |
2889 \begin{center} | |
2890 \begin{tabular}{rcrcrcrcrcr} | |
2891 $\zeta_0$ & $=$ & $0w_4$ & $+$ & $0w_3$ & $+$ & $0w_2$ & $+$ & $0w_1$ & $+$ & $1w_0$ \\ | |
2892 $16 \cdot \zeta_{1 \over 2}$ & $=$ & $1w_4$ & $+$ & $2w_3$ & $+$ & $4w_2$ & $+$ & $8w_1$ & $+$ & $16w_0$ \\ | |
2893 $\zeta_1$ & $=$ & $1w_4$ & $+$ & $1w_3$ & $+$ & $1w_2$ & $+$ & $1w_1$ & $+$ & $1w_0$ \\ | |
2894 $\zeta_2$ & $=$ & $16w_4$ & $+$ & $8w_3$ & $+$ & $4w_2$ & $+$ & $2w_1$ & $+$ & $1w_0$ \\ | |
2895 $\zeta_{\infty}$ & $=$ & $1w_4$ & $+$ & $0w_3$ & $+$ & $0w_2$ & $+$ & $0w_1$ & $+$ & $0w_0$ \\ | |
2896 \end{tabular} | |
2897 \end{center} | |
2898 | |
2899 A trivial solution to this matrix requires $12$ subtractions, two multiplications by a small power of two, two divisions by a small power | |
2900 of two, two divisions by three and one multiplication by three. All of these $19$ sub-operations require less than quadratic time, meaning that | |
2901 the algorithm can be faster than a baseline multiplication. However, the greater complexity of this algorithm places the cutoff point | |
2902 (\textbf{TOOM\_MUL\_CUTOFF}) where Toom-Cook becomes more efficient much higher than the Karatsuba cutoff point. | |
2903 | |
2904 \begin{figure}[!here] | |
2905 \begin{small} | |
2906 \begin{center} | |
2907 \begin{tabular}{l} | |
2908 \hline Algorithm \textbf{mp\_toom\_mul}. \\ | |
2909 \textbf{Input}. mp\_int $a$ and mp\_int $b$ \\ | |
2910 \textbf{Output}. $c \leftarrow a \cdot b $ \\ | |
2911 \hline \\ | |
2912 Split $a$ and $b$ into three pieces. E.g. $a = a_2 \beta^{2k} + a_1 \beta^{k} + a_0$ \\ | |
2913 1. $k \leftarrow \lfloor \mbox{min}(a.used, b.used) / 3 \rfloor$ \\ | |
2914 2. $a_0 \leftarrow a \mbox{ (mod }\beta^{k}\mbox{)}$ \\ | |
2915 3. $a_1 \leftarrow \lfloor a / \beta^k \rfloor$, $a_1 \leftarrow a_1 \mbox{ (mod }\beta^{k}\mbox{)}$ \\ | |
2916 4. $a_2 \leftarrow \lfloor a / \beta^{2k} \rfloor$, $a_2 \leftarrow a_2 \mbox{ (mod }\beta^{k}\mbox{)}$ \\ | |
5. $b_0 \leftarrow b \mbox{ (mod }\beta^{k}\mbox{)}$ \\
6. $b_1 \leftarrow \lfloor b / \beta^k \rfloor$, $b_1 \leftarrow b_1 \mbox{ (mod }\beta^{k}\mbox{)}$ \\
7. $b_2 \leftarrow \lfloor b / \beta^{2k} \rfloor$, $b_2 \leftarrow b_2 \mbox{ (mod }\beta^{k}\mbox{)}$ \\
2920 \\ | |
2921 Find the five equations for $w_0, w_1, ..., w_4$. \\ | |
2922 8. $w_0 \leftarrow a_0 \cdot b_0$ \\ | |
2923 9. $w_4 \leftarrow a_2 \cdot b_2$ \\ | |
2924 10. $tmp_1 \leftarrow 2 \cdot a_0$, $tmp_1 \leftarrow a_1 + tmp_1$, $tmp_1 \leftarrow 2 \cdot tmp_1$, $tmp_1 \leftarrow tmp_1 + a_2$ \\ | |
2925 11. $tmp_2 \leftarrow 2 \cdot b_0$, $tmp_2 \leftarrow b_1 + tmp_2$, $tmp_2 \leftarrow 2 \cdot tmp_2$, $tmp_2 \leftarrow tmp_2 + b_2$ \\ | |
2926 12. $w_1 \leftarrow tmp_1 \cdot tmp_2$ \\ | |
2927 13. $tmp_1 \leftarrow 2 \cdot a_2$, $tmp_1 \leftarrow a_1 + tmp_1$, $tmp_1 \leftarrow 2 \cdot tmp_1$, $tmp_1 \leftarrow tmp_1 + a_0$ \\ | |
2928 14. $tmp_2 \leftarrow 2 \cdot b_2$, $tmp_2 \leftarrow b_1 + tmp_2$, $tmp_2 \leftarrow 2 \cdot tmp_2$, $tmp_2 \leftarrow tmp_2 + b_0$ \\ | |
2929 15. $w_3 \leftarrow tmp_1 \cdot tmp_2$ \\ | |
2930 16. $tmp_1 \leftarrow a_0 + a_1$, $tmp_1 \leftarrow tmp_1 + a_2$, $tmp_2 \leftarrow b_0 + b_1$, $tmp_2 \leftarrow tmp_2 + b_2$ \\ | |
2931 17. $w_2 \leftarrow tmp_1 \cdot tmp_2$ \\ | |
2932 \\ | |
2933 Continued on the next page.\\ | |
2934 \hline | |
2935 \end{tabular} | |
2936 \end{center} | |
2937 \end{small} | |
2938 \caption{Algorithm mp\_toom\_mul} | |
2939 \end{figure} | |
2940 | |
2941 \newpage\begin{figure}[!here] | |
2942 \begin{small} | |
2943 \begin{center} | |
2944 \begin{tabular}{l} | |
2945 \hline Algorithm \textbf{mp\_toom\_mul} (continued). \\ | |
2946 \textbf{Input}. mp\_int $a$ and mp\_int $b$ \\ | |
2947 \textbf{Output}. $c \leftarrow a \cdot b $ \\ | |
2948 \hline \\ | |
2949 Now solve the system of equations. \\ | |
18. $w_1 \leftarrow w_1 - w_4$, $w_3 \leftarrow w_3 - w_0$ \\
2951 19. $w_1 \leftarrow \lfloor w_1 / 2 \rfloor$, $w_3 \leftarrow \lfloor w_3 / 2 \rfloor$ \\ | |
2952 20. $w_2 \leftarrow w_2 - w_0$, $w_2 \leftarrow w_2 - w_4$ \\ | |
2953 21. $w_1 \leftarrow w_1 - w_2$, $w_3 \leftarrow w_3 - w_2$ \\ | |
2954 22. $tmp_1 \leftarrow 8 \cdot w_0$, $w_1 \leftarrow w_1 - tmp_1$, $tmp_1 \leftarrow 8 \cdot w_4$, $w_3 \leftarrow w_3 - tmp_1$ \\ | |
2955 23. $w_2 \leftarrow 3 \cdot w_2$, $w_2 \leftarrow w_2 - w_1$, $w_2 \leftarrow w_2 - w_3$ \\ | |
2956 24. $w_1 \leftarrow w_1 - w_2$, $w_3 \leftarrow w_3 - w_2$ \\ | |
2957 25. $w_1 \leftarrow \lfloor w_1 / 3 \rfloor, w_3 \leftarrow \lfloor w_3 / 3 \rfloor$ \\ | |
2958 \\ | |
2959 Now substitute $\beta^k$ for $x$ by shifting $w_0, w_1, ..., w_4$. \\ | |
2960 26. for $n$ from $1$ to $4$ do \\ | |
2961 \hspace{3mm}26.1 $w_n \leftarrow w_n \cdot \beta^{nk}$ \\ | |
2962 27. $c \leftarrow w_0 + w_1$, $c \leftarrow c + w_2$, $c \leftarrow c + w_3$, $c \leftarrow c + w_4$ \\ | |
2963 28. Return(\textit{MP\_OKAY}) \\ | |
2964 \hline | |
2965 \end{tabular} | |
2966 \end{center} | |
2967 \end{small} | |
2968 \caption{Algorithm mp\_toom\_mul (continued)} | |
2969 \end{figure} | |
2970 | |
2971 \textbf{Algorithm mp\_toom\_mul.} | |
2972 This algorithm computes the product of two mp\_int variables $a$ and $b$ using the Toom-Cook approach. Compared to the Karatsuba multiplication, this | |
2973 algorithm has a lower asymptotic running time of approximately $O(n^{1.464})$ but at an obvious cost in overhead. In this | |
2974 description, several statements have been compounded to save space. The intention is that the statements are executed from left to right across | |
2975 any given step. | |
2976 | |
2977 The two inputs $a$ and $b$ are first split into three $k$-digit integers $a_0, a_1, a_2$ and $b_0, b_1, b_2$ respectively. From these smaller | |
2978 integers the coefficients of the polynomial basis representations $f(x)$ and $g(x)$ are known and can be used to find the relations required. | |
2979 | |
The first two relations $w_0$ and $w_4$ are the points $\zeta_{0}$ and $\zeta_{\infty}$ respectively. The relations $w_1, w_2$ and $w_3$ correspond
to the points $16 \cdot \zeta_{1 \over 2}, \zeta_{1}$ and $\zeta_{2}$ respectively. These are found using logical shifts to independently find
2982 $f(y)$ and $g(y)$ which significantly speeds up the algorithm. | |
2983 | |
After the five relations $w_0, w_1, \ldots, w_4$ have been computed, the system they represent must be solved in order for the unknown coefficients
$w_1, w_2$ and $w_3$ to be isolated. Steps 18 through 25 perform the system reduction required as previously described. Each step of
the reduction represents the comparable matrix operation that would be performed had this been done by pencil and paper. For example, step 18 indicates
that row $4$ must be subtracted from row $1$ and simultaneously row $0$ subtracted from row $3$.
2988 | |
Once the coefficients have been isolated, the polynomial $W(x) = \sum_{i=0}^{2n} w_i x^i$ is known. By substituting $\beta^{k}$ for $x$, the integer
2990 result $a \cdot b$ is produced. | |
2991 | |
2992 EXAM,bn_mp_toom_mul.c | |
2993 | |
The first obvious thing to note is that this algorithm is complicated. The complexity is worth it if you are multiplying very
large numbers. For example, a 10,000 digit multiplication takes approximately 99,282,205 fewer single precision multiplications with
Toom--Cook than a Comba or baseline approach (this is a savings of more than 99$\%$). For most ``crypto'' sized numbers this
algorithm is not practical as Karatsuba has a much lower cutoff point.

First we split $a$ and $b$ into three roughly equal portions. This has been accomplished (lines @40,mod@ to @69,rshd@) with
combinations of mp\_rshd() and mp\_mod\_2d() function calls. At this point $a = a2 \cdot \beta^{2k} + a1 \cdot \beta^{k} + a0$ and similarly
for $b$.

Next we compute the five points $w0, w1, w2, w3$ and $w4$. Recall that $w0$ and $w4$ can be computed directly from the portions so
we get those out of the way first (lines @72,mul@ and @77,mul@). Next we compute $w1, w2$ and $w3$ using Horner's method.

After this point we solve for the actual values of $w1, w2$ and $w3$ by reducing the $5 \times 5$ system which is relatively
straightforward.
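
Because the reduction sequence is easy to get wrong, it is worth verifying on small numbers. The following stand-alone
program (plain machine integers, not mp\_ints) builds the five relations exactly as steps 8 through 17 do, applies the
reduction of steps 18 through 25 and prints the recovered coefficients next to the directly computed ones.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
   long long a0 = 7, a1 = 3, a2 = 5, b0 = 2, b1 = 9, b2 = 4;

   /* the five relations */
   long long w0 = a0 * b0;                                   /* zeta_0        */
   long long w4 = a2 * b2;                                   /* zeta_infinity */
   long long w1 = (4*a0 + 2*a1 + a2) * (4*b0 + 2*b1 + b2);   /* 16*zeta_(1/2) */
   long long w3 = (4*a2 + 2*a1 + a0) * (4*b2 + 2*b1 + b0);   /* zeta_2        */
   long long w2 = (a0 + a1 + a2) * (b0 + b1 + b2);           /* zeta_1        */

   /* steps 18 through 25: reduce the system */
   w1 -= w4;   w3 -= w0;          /* 18 */
   w1 /= 2;    w3 /= 2;           /* 19 */
   w2 -= w0;   w2 -= w4;          /* 20 */
   w1 -= w2;   w3 -= w2;          /* 21 */
   w1 -= 8*w0; w3 -= 8*w4;        /* 22 */
   w2 = 3*w2 - w1 - w3;           /* 23 */
   w1 -= w2;   w3 -= w2;          /* 24 */
   w1 /= 3;    w3 /= 3;           /* 25 */

   /* both lines print 69 65 57 */
   printf("%lld %lld %lld\n", w1, w2, w3);
   printf("%lld %lld %lld\n", a0*b1 + a1*b0, a0*b2 + a1*b1 + a2*b0, a1*b2 + a2*b1);
   return 0;
}
\end{verbatim}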
19 | 3008 |
3009 \subsection{Signed Multiplication} | |
Now that algorithms to handle multiplications of every useful dimension have been developed, a rather simple finishing touch is required. So far all
3011 of the multiplication algorithms have been unsigned multiplications which leaves only a signed multiplication algorithm to be established. | |
3012 | |
\begin{figure}[!here]
19 | 3014 \begin{small} |
3015 \begin{center} | |
3016 \begin{tabular}{l} | |
3017 \hline Algorithm \textbf{mp\_mul}. \\ | |
3018 \textbf{Input}. mp\_int $a$ and mp\_int $b$ \\ | |
3019 \textbf{Output}. $c \leftarrow a \cdot b$ \\ | |
3020 \hline \\ | |
3021 1. If $a.sign = b.sign$ then \\ | |
3022 \hspace{3mm}1.1 $sign = MP\_ZPOS$ \\ | |
3023 2. else \\ | |
3024 \hspace{3mm}2.1 $sign = MP\_ZNEG$ \\ | |
3025 3. If min$(a.used, b.used) \ge TOOM\_MUL\_CUTOFF$ then \\ | |
3026 \hspace{3mm}3.1 $c \leftarrow a \cdot b$ using algorithm mp\_toom\_mul \\ | |
3027 4. else if min$(a.used, b.used) \ge KARATSUBA\_MUL\_CUTOFF$ then \\ | |
3028 \hspace{3mm}4.1 $c \leftarrow a \cdot b$ using algorithm mp\_karatsuba\_mul \\ | |
3029 5. else \\ | |
3030 \hspace{3mm}5.1 $digs \leftarrow a.used + b.used + 1$ \\ | |
\hspace{3mm}5.2 If $digs < MP\_WARRAY$ and min$(a.used, b.used) \le \delta$ then \\
3032 \hspace{6mm}5.2.1 $c \leftarrow a \cdot b \mbox{ (mod }\beta^{digs}\mbox{)}$ using algorithm fast\_s\_mp\_mul\_digs. \\ | |
3033 \hspace{3mm}5.3 else \\ | |
3034 \hspace{6mm}5.3.1 $c \leftarrow a \cdot b \mbox{ (mod }\beta^{digs}\mbox{)}$ using algorithm s\_mp\_mul\_digs. \\ | |
3035 6. $c.sign \leftarrow sign$ \\ | |
3036 7. Return the result of the unsigned multiplication performed. \\ | |
3037 \hline | |
3038 \end{tabular} | |
3039 \end{center} | |
3040 \end{small} | |
3041 \caption{Algorithm mp\_mul} | |
3042 \end{figure} | |
3043 | |
3044 \textbf{Algorithm mp\_mul.} | |
3045 This algorithm performs the signed multiplication of two inputs. It will make use of any of the three unsigned multiplication algorithms | |
3046 available when the input is of appropriate size. The \textbf{sign} of the result is not set until the end of the algorithm since algorithm | |
3047 s\_mp\_mul\_digs will clear it. | |
3048 | |
3049 EXAM,bn_mp_mul.c | |
3050 | |
3051 The implementation is rather simplistic and is not particularly noteworthy. Line @22,?@ computes the sign of the result using the ``?'' | |
3052 operator from the C programming language. Line @37,<<@ computes $\delta$ using the fact that $1 << k$ is equal to $2^k$. | |
3053 | |
3054 \section{Squaring} | |
3055 \label{sec:basesquare} | |
3056 | |
3057 Squaring is a special case of multiplication where both multiplicands are equal. At first it may seem like there is no significant optimization | |
3058 available but in fact there is. Consider the multiplication of $576$ against $241$. In total there will be nine single precision multiplications | |
3059 performed which are $1\cdot 6$, $1 \cdot 7$, $1 \cdot 5$, $4 \cdot 6$, $4 \cdot 7$, $4 \cdot 5$, $2 \cdot 6$, $2 \cdot 7$ and $2 \cdot 5$. Now consider | |
3060 the multiplication of $123$ against $123$. The nine products are $3 \cdot 3$, $3 \cdot 2$, $3 \cdot 1$, $2 \cdot 3$, $2 \cdot 2$, $2 \cdot 1$, | |
3061 $1 \cdot 3$, $1 \cdot 2$ and $1 \cdot 1$. On closer inspection some of the products are equivalent. For example, $3 \cdot 2 = 2 \cdot 3$ | |
3062 and $3 \cdot 1 = 1 \cdot 3$. | |
3063 | |
3064 For any $n$-digit input, there are ${{\left (n^2 + n \right)}\over 2}$ possible unique single precision multiplications required compared to the $n^2$ | |
3065 required for multiplication. The following diagram gives an example of the operations required. | |
3066 | |
3067 \begin{figure}[here] | |
3068 \begin{center} | |
3069 \begin{tabular}{ccccc|c} | |
3070 &&1&2&3&\\ | |
3071 $\times$ &&1&2&3&\\ | |
3072 \hline && $3 \cdot 1$ & $3 \cdot 2$ & $3 \cdot 3$ & Row 0\\ | |
3073 & $2 \cdot 1$ & $2 \cdot 2$ & $2 \cdot 3$ && Row 1 \\ | |
3074 $1 \cdot 1$ & $1 \cdot 2$ & $1 \cdot 3$ &&& Row 2 \\ | |
3075 \end{tabular} | |
3076 \end{center} | |
3077 \caption{Squaring Optimization Diagram} | |
3078 \end{figure} | |
3079 | |
3080 MARK,SQUARE | |
3081 Starting from zero and numbering the columns from right to left a very simple pattern becomes obvious. For the purposes of this discussion let $x$ | |
3082 represent the number being squared. The first observation is that in row $k$ the $2k$'th column of the product has a $\left (x_k \right)^2$ term in it. | |
3083 | |
3084 The second observation is that every column $j$ in row $k$ where $j \ne 2k$ is part of a double product. Every non-square term of a column will | |
3085 appear twice hence the name ``double product''. Every odd column is made up entirely of double products. In fact every column is made up of double | |
3086 products and at most one square (\textit{see the exercise section}). | |
3087 | |
3088 The third and final observation is that for row $k$ the first unique non-square term, that is, one that hasn't already appeared in an earlier row, | |
3089 occurs at column $2k + 1$. For example, on row $1$ of the previous squaring, column one is part of the double product with column one from row zero. | |
3090 Column two of row one is a square and column three is the first unique column. | |
3091 | |
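These observations can be tried out with a short standalone sketch in base $\beta = 10$. This is only a toy illustration of the column structure, not the library routine: the square terms land in the even columns $2k$ and every remaining product is added twice.

\begin{verbatim}
#include <stdio.h>

/* Squares a little-endian base-10 digit array using the observations above:
   row ix contributes a[ix]^2 to column 2*ix and a doubled product to every
   column ix+iy for iy > ix.  Toy illustration only, with beta = 10.         */
static void toy_sqr(const int *a, int used, int *t, int tlen)
{
   int ix, iy;

   for (ix = 0; ix < tlen; ix++) t[ix] = 0;

   for (ix = 0; ix < used; ix++) {
      t[2 * ix] += a[ix] * a[ix];            /* the single square term   */
      for (iy = ix + 1; iy < used; iy++) {
         t[ix + iy] += 2 * a[ix] * a[iy];    /* the double products      */
      }
   }

   /* propagate the carries to return to single base-10 digits */
   for (ix = 0; ix < tlen - 1; ix++) {
      t[ix + 1] += t[ix] / 10;
      t[ix]     %= 10;
   }
}

int main(void)
{
   int a[3] = { 3, 2, 1 };      /* 123 stored least significant digit first */
   int t[7];
   int i;

   toy_sqr(a, 3, t, 7);
   for (i = 6; i >= 0; i--) printf("%d", t[i]);
   printf("\n");                /* prints 0015129, i.e. 123^2 = 15129       */
   return 0;
}
\end{verbatim}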
3092 \subsection{The Baseline Squaring Algorithm} | |
3093 The baseline squaring algorithm is meant to be a catch-all squaring algorithm. It will handle any of the input sizes that the faster routines | |
3094 will not handle. | |
3095 | |
3096 \begin{figure}[!here] |
19 | 3097 \begin{small} |
3098 \begin{center} | |
3099 \begin{tabular}{l} | |
3100 \hline Algorithm \textbf{s\_mp\_sqr}. \\ | |
3101 \textbf{Input}. mp\_int $a$ \\ | |
3102 \textbf{Output}. $b \leftarrow a^2$ \\ | |
3103 \hline \\ | |
3104 1. Init a temporary mp\_int of at least $2 \cdot a.used +1$ digits. (\textit{mp\_init\_size}) \\ | |
3105 2. If step 1 failed return(\textit{MP\_MEM}) \\ | |
3106 3. $t.used \leftarrow 2 \cdot a.used + 1$ \\ | |
3107 4. For $ix$ from 0 to $a.used - 1$ do \\ | |
3108 \hspace{3mm}Calculate the square. \\ | |
3109 \hspace{3mm}4.1 $\hat r \leftarrow t_{2ix} + \left (a_{ix} \right )^2$ \\ | |
3110 \hspace{3mm}4.2 $t_{2ix} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
3111 \hspace{3mm}Calculate the double products after the square. \\ | |
3112 \hspace{3mm}4.3 $u \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
3113 \hspace{3mm}4.4 For $iy$ from $ix + 1$ to $a.used - 1$ do \\ | |
3114 \hspace{6mm}4.4.1 $\hat r \leftarrow 2 \cdot a_{ix}a_{iy} + t_{ix + iy} + u$ \\ | |
3115 \hspace{6mm}4.4.2 $t_{ix + iy} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
3116 \hspace{6mm}4.4.3 $u \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
3117 \hspace{3mm}Set the last carry. \\ | |
3118 \hspace{3mm}4.5 While $u > 0$ do \\ | |
3119 \hspace{6mm}4.5.1 $iy \leftarrow iy + 1$ \\ | |
3120 \hspace{6mm}4.5.2 $\hat r \leftarrow t_{ix + iy} + u$ \\ | |
3121 \hspace{6mm}4.5.3 $t_{ix + iy} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
3122 \hspace{6mm}4.5.4 $u \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
3123 5. Clamp excess digits of $t$. (\textit{mp\_clamp}) \\ | |
3124 6. Exchange $b$ and $t$. \\ | |
3125 7. Clear $t$ (\textit{mp\_clear}) \\ | |
3126 8. Return(\textit{MP\_OKAY}) \\ | |
3127 \hline | |
3128 \end{tabular} | |
3129 \end{center} | |
3130 \end{small} | |
3131 \caption{Algorithm s\_mp\_sqr} | |
3132 \end{figure} | |
3133 | |
3134 \textbf{Algorithm s\_mp\_sqr.} | |
3135 This algorithm computes the square of an input using the three observations on squaring. It is based fairly faithfully on algorithm 14.16 of HAC | |
3136 \cite[pp.596-597]{HAC}. Similar to algorithm s\_mp\_mul\_digs, a temporary mp\_int is allocated to hold the result of the squaring. This allows the | |
3137 destination mp\_int to be the same as the source mp\_int. | |
3138 | |
3139 The outer loop of this algorithm begins on step 4. It is best to think of the outer loop as walking down the rows of the partial results, while | |
3140 the inner loop computes the columns of the partial result. Step 4.1 and 4.2 compute the square term for each row, and step 4.3 and 4.4 propagate | |
3141 the carry and compute the double products. | |
3142 | |
3143 The requirement that a mp\_word be able to represent the range $0 \le x < 2 \beta^2$ arises from this | |
3144 very algorithm. The product $a_{ix}a_{iy}$ will lie in the range $0 \le x \le \beta^2 - 2\beta + 1$ which is obviously less than $\beta^2$ meaning that | |
3145 when it is multiplied by two, it can be properly represented by a mp\_word. | |
3146 | |
3147 Similar to algorithm s\_mp\_mul\_digs, after every pass of the inner loop, the destination is correctly set to the sum of all of the partial | |
3148 results calculated so far. This involves expensive carry propagation which will be eliminated in the next algorithm. | |
3149 | |
3150 EXAM,bn_s_mp_sqr.c | |
3151 | |
3152 Inside the outer loop (line @32,for@) the square term is calculated on line @35,r =@. The carry (line @42,>>@) has been
3153 extracted from the mp\_word accumulator using a right shift. Aliases for $a_{ix}$ and $t_{ix+iy}$ are initialized
3154 (lines @45,tmpx@ and @48,tmpt@) to simplify the inner loop. The doubling is performed using two
3155 additions (line @57,r + r@) since it is usually faster than shifting, if not at least as fast.
3156 
3157 The important observation is that the inner loop does not begin at $iy = 0$ like for multiplication. As such the inner loops
3158 get progressively shorter as the algorithm proceeds. This is what leads to the savings compared to using a multiplication to
3159 square a number.
19 | 3160 |
3161 \subsection{Faster Squaring by the ``Comba'' Method} | |
3162 A major drawback to the baseline method is the requirement for single precision shifting inside the $O(n^2)$ nested loop. Squaring has an additional | |
3163 drawback that it must double the product inside the inner loop as well. As for multiplication, the Comba technique can be used to eliminate these | |
3164 performance hazards. | |
3165 | |
3166 The first obvious solution is to make an array of mp\_words which will hold all of the columns. This will indeed eliminate all of the carry | |
3167 propagation operations from the inner loop. However, the inner product must still be doubled $O(n^2)$ times. The solution stems from the simple fact | |
3168 that $2a + 2b + 2c = 2(a + b + c)$. That is the sum of all of the double products is equal to double the sum of all the products. For example, | |
3169 $ab + ba + ac + ca = 2ab + 2ac = 2(ab + ac)$. | |
3170 | |
3170 
3171 However, we cannot simply double all of the columns, since the squares appear only once per row. The most practical solution is to have two
3172 mp\_word arrays. One array will hold the squares and the other array will hold the double products. With both arrays the doubling and
3173 carry propagation can be moved to a $O(n)$ work level outside the $O(n^2)$ level. In this case, we have an even simpler solution in mind.
19 | 3174 |
3175 \newpage\begin{figure}[!here] | |
3176 \begin{small} | |
3177 \begin{center} | |
3178 \begin{tabular}{l} | |
3179 \hline Algorithm \textbf{fast\_s\_mp\_sqr}. \\ | |
3180 \textbf{Input}. mp\_int $a$ \\ | |
3181 \textbf{Output}. $b \leftarrow a^2$ \\ | |
3182 \hline \\ | |
3183 Place an array of \textbf{MP\_WARRAY} mp\_digits named $W$ on the stack. \\
3184 1. If $b.alloc < 2a.used + 1$ then grow $b$ to $2a.used + 1$ digits. (\textit{mp\_grow}). \\
3185 2. If step 1 failed return(\textit{MP\_MEM}). \\
3186 \\
3187 3. $pa \leftarrow 2 \cdot a.used$ \\
3188 4. $\hat W1 \leftarrow 0$ \\
3189 5. for $ix$ from $0$ to $pa - 1$ do \\
3190 \hspace{3mm}5.1 $\_ \hat W \leftarrow 0$ \\
3191 \hspace{3mm}5.2 $ty \leftarrow \mbox{MIN}(a.used - 1, ix)$ \\
3192 \hspace{3mm}5.3 $tx \leftarrow ix - ty$ \\
3193 \hspace{3mm}5.4 $iy \leftarrow \mbox{MIN}(a.used - tx, ty + 1)$ \\
3194 \hspace{3mm}5.5 $iy \leftarrow \mbox{MIN}(iy, \lfloor \left (ty - tx + 1 \right )/2 \rfloor)$ \\
3195 \hspace{3mm}5.6 for $iz$ from $0$ to $iy - 1$ do \\
3196 \hspace{6mm}5.6.1 $\_ \hat W \leftarrow \_ \hat W + a_{tx + iz}a_{ty - iz}$ \\
3197 \hspace{3mm}5.7 $\_ \hat W \leftarrow 2 \cdot \_ \hat W + \hat W1$ \\
3198 \hspace{3mm}5.8 if $ix$ is even then \\
3199 \hspace{6mm}5.8.1 $\_ \hat W \leftarrow \_ \hat W + \left ( a_{\lfloor ix/2 \rfloor}\right )^2$ \\
3200 \hspace{3mm}5.9 $W_{ix} \leftarrow \_ \hat W (\mbox{mod }\beta)$ \\
3201 \hspace{3mm}5.10 $\hat W1 \leftarrow \lfloor \_ \hat W / \beta \rfloor$ \\
3202 \\
3203 6. $oldused \leftarrow b.used$ \\
3204 7. $b.used \leftarrow 2 \cdot a.used$ \\
3205 8. for $ix$ from $0$ to $pa - 1$ do \\
3206 \hspace{3mm}8.1 $b_{ix} \leftarrow W_{ix}$ \\
3207 9. for $ix$ from $pa$ to $oldused - 1$ do \\
3208 \hspace{3mm}9.1 $b_{ix} \leftarrow 0$ \\
3209 10. Clamp excess digits from $b$. (\textit{mp\_clamp}) \\
3210 11. Return(\textit{MP\_OKAY}). \\
19 | 3211 \hline |
3212 \end{tabular} | |
3213 \end{center} | |
3214 \end{small} | |
3215 \caption{Algorithm fast\_s\_mp\_sqr} | |
3216 \end{figure} | |
3217 | |
3218 \textbf{Algorithm fast\_s\_mp\_sqr.} | |
3219 This algorithm computes the square of an input using the Comba technique. It is designed to be a replacement for algorithm
3220 s\_mp\_sqr when the number of input digits is less than \textbf{MP\_WARRAY} and less than $\delta \over 2$.
3221 This algorithm is very similar to the Comba multiplier except for a few key differences we shall make note of.
3222 
3223 First, we have separate accumulator and carry variables, $\_ \hat W$ and $\hat W1$ respectively. This is because the inner loop
3224 products are to be doubled; if the previous carry were added in before the doubling, it would be doubled as well. Next, an
3225 additional MIN condition is applied to $iy$ (step 5.5) to prevent overlapping digits. For example, $a_3 \cdot a_5$ is equal
3226 to $a_5 \cdot a_3$. In the multiplication case both products would be computed (provided $5 < a.used$ and $3 \ge 0$), but since we double the sum
3227 of the products just outside the inner loop we have to avoid computing both here. This is also a good thing since we perform
3228 fewer multiplications and the routine ends up being faster.
3229 
3230 Finally the last difference is the addition of the ``square'' term outside the inner loop (step 5.8). We add in the square
3231 only to even outputs and it is the square of the term at the $\lfloor ix / 2 \rfloor$ position.
19 | 3232 |
3233 EXAM,bn_fast_s_mp_sqr.c | |
3234 | |
3235 This implementation is essentially a copy of Comba multiplication with the appropriate changes added to make it faster for |
3236 the special case of squaring. |
19 | 3237 |
3238 \subsection{Polynomial Basis Squaring} | |
3239 The same algorithm that performs optimal polynomial basis multiplication can be used to perform polynomial basis squaring. The minor exception | |
3240 is that $\zeta_y = f(y)g(y)$ is actually equivalent to $\zeta_y = f(y)^2$ since $f(y) = g(y)$. Instead of performing $2n + 1$
3241 multiplications to find the $\zeta$ relations, $2n + 1$ squarings are performed.
3242 | |
3243 \subsection{Karatsuba Squaring} | |
3244 Let $f(x) = ax + b$ represent the polynomial basis representation of a number to square. | |
3245 Let $h(x) = \left ( f(x) \right )^2$ represent the square of the polynomial. The Karatsuba equation can be modified to square a | |
3246 number with the following equation. | |
3247 | |
3248 \begin{equation} | |
3249 h(x) = a^2x^2 + \left (a^2 + b^2 - (a - b)^2 \right )x + b^2 | |
3250 \end{equation} | |
3251 | |
3252 Upon closer inspection this equation only requires the calculation of three half-sized squares: $a^2$, $b^2$ and $(a - b)^2$. As in | |
3253 Karatsuba multiplication, this algorithm can be applied recursively on the input and will achieve an asymptotic running time of | |
3254 $O \left ( n^{lg(3)} \right )$. | |
3255 | |
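The equation can be verified with a short standalone sketch which splits a small number at an assumed radix point of $10^4$ and rebuilds its square from the three half--size squares. This is only an illustration of the identity, not the mp\_karatsuba\_sqr routine.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* Toy check of the squaring form of the Karatsuba equation using a split at
   10^4 (illustrative only; the sample value is arbitrary).                  */
int main(void)
{
   uint64_t x = 56789123;                       /* number to square          */
   uint64_t a = x / 10000, b = x % 10000;       /* the two halves            */
   uint64_t d = (a > b) ? a - b : b - a;        /* |a - b|                   */

   uint64_t mid = a*a + b*b - d*d;              /* equals 2ab                */
   uint64_t h   = a*a*100000000ULL + mid*10000ULL + b*b;

   /* both values printed are 3225004491109129                               */
   printf("%llu %llu\n", (unsigned long long)h, (unsigned long long)(x*x));
   return 0;
}
\end{verbatim}
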
3256 If the asymptotic times of Karatsuba squaring and multiplication are the same, why not simply use the multiplication algorithm | |
3257 instead? The answer to this arises from the cutoff point for squaring. As in multiplication there exists a cutoff point, at which the | |
3258 time required for a Comba based squaring and a Karatsuba based squaring meet. Due to the overhead inherent in the Karatsuba method, the cutoff | |
3259 point is fairly high. For example, on an AMD Athlon XP processor with $\beta = 2^{28}$, the cutoff point is around 127 digits. | |
3260 | |
3261 Consider squaring a 200 digit number with this technique. It will be split into two 100 digit halves which are subsequently squared. | |
3262 The 100 digit halves will not be squared using Karatsuba, but instead using the faster Comba based squaring algorithm. If Karatsuba multiplication | |
3263 were used instead, the 100 digit numbers would be squared with a slower Comba based multiplication. | |
3264 | |
3265 \newpage\begin{figure}[!here] | |
3266 \begin{small} | |
3267 \begin{center} | |
3268 \begin{tabular}{l} | |
3269 \hline Algorithm \textbf{mp\_karatsuba\_sqr}. \\ | |
3270 \textbf{Input}. mp\_int $a$ \\ | |
3271 \textbf{Output}. $b \leftarrow a^2$ \\ | |
3272 \hline \\ | |
3273 1. Initialize the following temporary mp\_ints: $x0$, $x1$, $t1$, $t2$, $x0x0$ and $x1x1$. \\ | |
3274 2. If any of the initializations on step 1 failed return(\textit{MP\_MEM}). \\ | |
3275 \\ | |
3276 Split the input. e.g. $a = x1\beta^B + x0$ \\ | |
3277 3. $B \leftarrow \lfloor a.used / 2 \rfloor$ \\ | |
3278 4. $x0 \leftarrow a \mbox{ (mod }\beta^B\mbox{)}$ (\textit{mp\_mod\_2d}) \\ | |
3279 5. $x1 \leftarrow \lfloor a / \beta^B \rfloor$ (\textit{mp\_lshd}) \\ | |
3280 \\ | |
3281 Calculate the three squares. \\ | |
3282 6. $x0x0 \leftarrow x0^2$ (\textit{mp\_sqr}) \\ | |
3283 7. $x1x1 \leftarrow x1^2$ \\ | |
3284 8. $t1 \leftarrow x1 - x0$ (\textit{mp\_sub}) \\ | |
3285 9. $t1 \leftarrow t1^2$ \\ | |
3286 \\ | |
3287 Compute the middle term. \\ | |
3288 10. $t2 \leftarrow x0x0 + x1x1$ (\textit{s\_mp\_add}) \\ | |
3289 11. $t1 \leftarrow t2 - t1$ \\ | |
3290 \\ | |
3291 Compute final product. \\ | |
3292 12. $t1 \leftarrow t1\beta^B$ (\textit{mp\_lshd}) \\ | |
3293 13. $x1x1 \leftarrow x1x1\beta^{2B}$ \\ | |
3294 14. $t1 \leftarrow t1 + x0x0$ \\ | |
3295 15. $b \leftarrow t1 + x1x1$ \\ | |
3296 16. Return(\textit{MP\_OKAY}). \\ | |
3297 \hline | |
3298 \end{tabular} | |
3299 \end{center} | |
3300 \end{small} | |
3301 \caption{Algorithm mp\_karatsuba\_sqr} | |
3302 \end{figure} | |
3303 | |
3304 \textbf{Algorithm mp\_karatsuba\_sqr.} | |
3305 This algorithm computes the square of an input $a$ using the Karatsuba technique. This algorithm is very similar to the Karatsuba based | |
3306 multiplication algorithm with the exception that the three half-size multiplications have been replaced with three half-size squarings. | |
3307 | |
3308 The radix point for squaring is simply placed exactly in the middle of the digits when the input has an odd number of digits, otherwise it is | |
3309 placed just below the middle. Step 3, 4 and 5 compute the two halves required using $B$ | |
3310 as the radix point. The first two squares in steps 6 and 7 are rather straightforward while the last square is of a more compact form. | |
3311 | |
3312 By expanding $\left (x1 - x0 \right )^2$, the $x1^2$ and $x0^2$ terms in the middle disappear, that is $x1^2 + x0^2 - (x1 - x0)^2 = 2 \cdot x0 \cdot x1$. | |
3313 Now if $5n$ single precision additions and a squaring of $n$-digits is faster than multiplying two $n$-digit numbers and doubling then | |
3314 this method is faster. Assuming no further recursions occur, the difference can be estimated with the following inequality. | |
3315 | |
3316 Let $p$ represent the cost of a single precision addition and $q$ the cost of a single precision multiplication both in terms of time\footnote{Or | |
3317 machine clock cycles.}. | |
3318 | |
3319 \begin{equation} | |
3320 5pn +{{q(n^2 + n)} \over 2} \le pn + qn^2 | |
3321 \end{equation} | |
3322 | |
3323 For example, on an AMD Athlon XP processor $p = {1 \over 3}$ and $q = 6$. This implies that the following inequality should hold. | |
3324 \begin{center} | |
3325 \begin{tabular}{rcl} | |
3326 ${5n \over 3} + 3n^2 + 3n$ & $<$ & ${n \over 3} + 6n^2$ \\ | |
3327 ${5 \over 3} + 3n + 3$ & $<$ & ${1 \over 3} + 6n$ \\ | |
3328 ${13 \over 9}$ & $<$ & $n$ \\ | |
3329 \end{tabular} | |
3330 \end{center} | |
3331 | |
3332 This results in a cutoff point around $n = 2$. As a consequence it is actually faster to compute the middle term the ``long way'' on processors | |
3333 where multiplication is substantially slower\footnote{On the Athlon there is a 1:17 ratio between clock cycles for addition and multiplication. On | |
3334 the Intel P4 processor this ratio is 1:29 making this method even more beneficial. The only common exception is the ARMv4 processor which has a | |
3335 ratio of 1:7. } than simpler operations such as addition. | |
3336 | |
3337 EXAM,bn_mp_karatsuba_sqr.c | |
3338 | |
3339 This implementation is largely based on the implementation of algorithm mp\_karatsuba\_mul. It uses the same inline style to copy and | |
3340 shift the input into the two halves. The loop from line @54,{@ to line @70,}@ has been modified since only one input exists. The \textbf{used} | |
3341 count of both $x0$ and $x1$ is fixed up and $x0$ is clamped before the calculations begin. At this point $x1$ and $x0$ are valid equivalents | |
3342 to the respective halves as if mp\_rshd and mp\_mod\_2d had been used. | |
3343 | |
3344 By inlining the copy and shift operations the cutoff point for Karatsuba multiplication can be lowered. On the Athlon the cutoff point | |
3345 is exactly at the point where Comba squaring can no longer be used (\textit{128 digits}). On slower processors such as the Intel P4 | |
3346 it is actually below the Comba limit (\textit{at 110 digits}). | |
3347 | |
3347 
3348 This routine uses the same error trap coding style as mp\_karatsuba\_mul. As the temporary variables are initialized errors are
3349 redirected to the error trap higher up. If the algorithm completes without error the error code is set to \textbf{MP\_OKAY} and
3350 mp\_clears are executed normally.
19 | 3351 |
3352 \subsection{Toom-Cook Squaring} | |
3353 The Toom-Cook squaring algorithm mp\_toom\_sqr is heavily based on the algorithm mp\_toom\_mul with the exception that squarings are used | |
3354 instead of multiplication to find the five relations. The reader is encouraged to read the description of the latter algorithm and try to |
19 | 3355 derive their own Toom-Cook squaring algorithm. |
3356 | |
3357 \subsection{High Level Squaring} | |
3358 \newpage\begin{figure}[!here] | |
3359 \begin{small} | |
3360 \begin{center} | |
3361 \begin{tabular}{l} | |
3362 \hline Algorithm \textbf{mp\_sqr}. \\ | |
3363 \textbf{Input}. mp\_int $a$ \\ | |
3364 \textbf{Output}. $b \leftarrow a^2$ \\ | |
3365 \hline \\ | |
3366 1. If $a.used \ge TOOM\_SQR\_CUTOFF$ then \\ | |
3367 \hspace{3mm}1.1 $b \leftarrow a^2$ using algorithm mp\_toom\_sqr \\ | |
3368 2. else if $a.used \ge KARATSUBA\_SQR\_CUTOFF$ then \\ | |
3369 \hspace{3mm}2.1 $b \leftarrow a^2$ using algorithm mp\_karatsuba\_sqr \\ | |
3370 3. else \\ | |
3371 \hspace{3mm}3.1 $digs \leftarrow 2 \cdot a.used + 1$ \\
3372 \hspace{3mm}3.2 If $digs < MP\_ARRAY$ and $a.used \le \delta$ then \\ | |
3373 \hspace{6mm}3.2.1 $b \leftarrow a^2$ using algorithm fast\_s\_mp\_sqr. \\ | |
3374 \hspace{3mm}3.3 else \\ | |
3375 \hspace{6mm}3.3.1 $b \leftarrow a^2$ using algorithm s\_mp\_sqr. \\ | |
3376 4. $b.sign \leftarrow MP\_ZPOS$ \\ | |
3377 5. Return the result of the unsigned squaring performed. \\ | |
3378 \hline | |
3379 \end{tabular} | |
3380 \end{center} | |
3381 \end{small} | |
3382 \caption{Algorithm mp\_sqr} | |
3383 \end{figure} | |
3384 | |
3385 \textbf{Algorithm mp\_sqr.} | |
3386 This algorithm computes the square of the input using one of four different algorithms. If the input is very large and has at least | |
3387 \textbf{TOOM\_SQR\_CUTOFF} or \textbf{KARATSUBA\_SQR\_CUTOFF} digits then either the Toom-Cook or the Karatsuba Squaring algorithm is used. If | |
3388 neither of the polynomial basis algorithms should be used then either the Comba or baseline algorithm is used. | |
3389 | |
3390 EXAM,bn_mp_sqr.c | |
3391 | |
3392 \section*{Exercises} | |
3393 \begin{tabular}{cl} | |
3394 $\left [ 3 \right ] $ & Devise an efficient algorithm for selection of the radix point to handle inputs \\ | |
3395 & that have different number of digits in Karatsuba multiplication. \\ | |
3396 & \\ | |
3397 $\left [ 2 \right ] $ & In ~SQUARE~ the fact that every column of a squaring is made up \\ |
19 | 3398 & of double products and at most one square is stated. Prove this statement. \\ |
3399 & \\ | |
3400 $\left [ 3 \right ] $ & Prove the equation for Karatsuba squaring. \\ | |
3401 & \\ | |
3402 $\left [ 1 \right ] $ & Prove that Karatsuba squaring requires $O \left (n^{lg(3)} \right )$ time. \\ | |
3403 & \\ | |
3404 $\left [ 2 \right ] $ & Determine the minimal ratio between addition and multiplication clock cycles \\ | |
3405 & required for equation $6.7$ to be true. \\ | |
3406 & \\ | |
3407 $\left [ 3 \right ] $ & Implement a threaded version of Comba multiplication (and squaring) where you \\
3408 & compute subsets of the columns in each thread. Determine a cutoff point where \\
3409 & it is effective and add the logic to mp\_mul() and mp\_sqr(). \\
3410 &\\
3411 $\left [ 4 \right ] $ & Same as the previous but also modify the Karatsuba and Toom-Cook. You must \\
3412 & increase the throughput of mp\_exptmod() for random odd moduli in the range \\
3413 & $512 \ldots 4096$ bits significantly ($> 2x$) to complete this challenge. \\
3414 & \\
3414 & \\ |
19 | 3415 \end{tabular} |
3416 | |
3417 \chapter{Modular Reduction} | |
3418 MARK,REDUCTION | |
3419 \section{Basics of Modular Reduction} | |
3420 \index{modular residue} | |
3421 Modular reduction is an operation that arises quite often within public key cryptography algorithms and various number theoretic algorithms, | |
3422 such as factoring. Modular reduction algorithms are the third class of algorithms of the ``multipliers'' set. A number $a$ is said to be \textit{reduced} | |
3423 modulo another number $b$ by finding the remainder of the division $a/b$. Full integer division with remainder is a topic to be covered | |
3424 in~\ref{sec:division}. | |
3425 | |
3426 Modular reduction is equivalent to solving for $r$ in the following equation. $a = bq + r$ where $q = \lfloor a/b \rfloor$. The result | |
3427 $r$ is said to be ``congruent to $a$ modulo $b$'' which is also written as $r \equiv a \mbox{ (mod }b\mbox{)}$. In other vernacular $r$ is known as the | |
3428 ``modular residue'' which leads to ``quadratic residue''\footnote{That's fancy talk for $b \equiv a^2 \mbox{ (mod }p\mbox{)}$.} and | |
3429 other forms of residues. | |
3430 | |
3431 Modular reductions are normally used to create either finite groups, rings or fields. The most common usage for performance driven modular reductions | |
3432 is in modular exponentiation algorithms. That is to compute $d = a^b \mbox{ (mod }c\mbox{)}$ as fast as possible. This operation is used in the | |
3433 RSA and Diffie-Hellman public key algorithms, for example. Modular multiplication and squaring also appear as fundamental operations in
3434 elliptic curve cryptographic algorithms. As will be discussed in the subsequent chapter there exist fast algorithms for computing modular |
19 | 3435 exponentiations without having to perform (\textit{in this example}) $b - 1$ multiplications. These algorithms will produce partial results in the |
3436 range $0 \le x < c^2$ which can be taken advantage of to create several efficient algorithms. They have also been used to create redundancy check
3437 algorithms known as CRCs, error correction codes such as Reed-Solomon and to solve a variety of number theoretic problems.
3438 | |
3439 \section{The Barrett Reduction} | |
3440 The Barrett reduction algorithm \cite{BARRETT} was inspired by fast division algorithms which multiply by the reciprocal to emulate | |
3441 division. Barrett's observation was that the residue $c$ of $a$ modulo $b$ is equal to
3442 | |
3443 \begin{equation} | |
3444 c = a - b \cdot \lfloor a/b \rfloor | |
3445 \end{equation} | |
3446 | |
3447 Since algorithms such as modular exponentiation would be using the same modulus extensively, typical DSP\footnote{It is worth noting that Barrett's paper | |
3448 targeted the DSP56K processor.} intuition would indicate the next step would be to replace $a/b$ by a multiplication by the reciprocal. However, | |
3449 DSP intuition on its own will not work as these numbers are considerably larger than the precision of common DSP floating point data types. | |
3450 It would take another common optimization to optimize the algorithm. | |
3451 | |
3452 \subsection{Fixed Point Arithmetic} | |
3453 The trick used to optimize the above equation is based on a technique of emulating floating point data types with fixed precision integers. Fixed | |
3454 point arithmetic became very popular as it greatly optimized the ``3d-shooter'' genre of games in the mid 1990s when floating point units were
3455 fairly slow if not unavailable. The idea behind fixed point arithmetic is to take a normal $k$-bit integer data type and break it into a $p$-bit
3456 integer and a $q$-bit fraction part (\textit{where $p+q = k$}). | |
3457 | |
3458 In this system a $k$-bit integer $n$ would actually represent $n/2^q$. For example, with $q = 4$ the integer $n = 37$ would actually represent the | |
3459 value $2.3125$. To multiply two fixed point numbers the integers are multiplied using traditional arithmetic and subsequently normalized by | |
3460 moving the implied decimal point back to where it should be. For example, with $q = 4$ to multiply the integers $9$ and $5$ they must be converted | |
3461 to fixed point first by multiplying by $2^q$. Let $a = 9(2^q)$ represent the fixed point representation of $9$ and $b = 5(2^q)$ represent the | |
3462 fixed point representation of $5$. The product $ab$ is equal to $45(2^{2q})$ which when normalized by dividing by $2^q$ produces $45(2^q)$. | |
3463 | |
3464 This technique became popular since a normal integer multiplication and logical shift right are the only required operations to perform a multiplication | |
3465 of two fixed point numbers. Using fixed point arithmetic, division can be easily approximated by multiplying by the reciprocal. If $2^q$ is | |
3466 equivalent to one then $2^q/b$ is equivalent to the fixed point approximation of $1/b$ using real arithmetic. Using this fact, dividing an integer
3467 $a$ by another integer $b$ can be achieved with the following expression. | |
3468 | |
3469 \begin{equation} | |
3470 \lfloor a / b \rfloor \mbox{ }\approx\mbox{ } \lfloor (a \cdot \lfloor 2^q / b \rfloor)/2^q \rfloor | |
3471 \end{equation} | |
3472 | |
3473 The precision of the division is proportional to the value of $q$. If the divisor $b$ is used frequently as is the case with | |
3474 modular exponentiation pre-computing $2^q/b$ will allow a division to be performed with a multiplication and a right shift. Both operations | |
3475 are considerably faster than division on most processors. | |
3476 | |
3477 Consider dividing $19$ by $5$. The correct result is $\lfloor 19/5 \rfloor = 3$. With $q = 3$ the reciprocal is $\lfloor 2^q/5 \rfloor = 1$ which | |
3478 leads to a product of $19$ which when divided by $2^q$ produces $2$. However, with $q = 4$ the reciprocal is $\lfloor 2^q/5 \rfloor = 3$ and | |
3479 the result of the emulated division is $\lfloor 3 \cdot 19 / 2^q \rfloor = 3$ which is correct. The value of $2^q$ must be close to or ideally | |
3480 larger than the dividend. In effect if $a$ is the dividend then $q$ should allow $0 \le \lfloor a/2^q \rfloor \le 1$ in order for this approach | |
3481 to work correctly. Plugging this form of division into the original equation the following modular residue equation arises.
3482 | |
3483 \begin{equation} | |
3484 c = a - b \cdot \lfloor (a \cdot \lfloor 2^q / b \rfloor)/2^q \rfloor | |
3485 \end{equation} | |
3486 | |
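The example of dividing $19$ by $5$ above can be reproduced with a few lines of standalone C. This is a toy sketch of the reciprocal technique only, not code from the library.

\begin{verbatim}
#include <stdio.h>

/* Toy illustration of division by a fixed point reciprocal: with too small
   a value of q the approximation fails, with a larger q the exact quotient
   is produced.  The values mirror the example in the text.                 */
int main(void)
{
   int a = 19, b = 5, q;

   for (q = 3; q <= 4; q++) {
      int recip = (1 << q) / b;                 /* floor(2^q / b)          */
      int quot  = (a * recip) >> q;             /* floor(a * recip / 2^q)  */
      printf("q = %d: estimated %d, exact %d\n", q, quot, a / b);
   }
   return 0;
}
\end{verbatim}
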
3487 Using the notation from \cite{BARRETT} the value of $\lfloor 2^q / b \rfloor$ will be represented by the $\mu$ symbol. Using the $\mu$ | |
3488 variable also helps reinforce the idea that it is meant to be computed once and re-used.
3489 | |
3490 \begin{equation} | |
3491 c = a - b \cdot \lfloor (a \cdot \mu)/2^q \rfloor | |
3492 \end{equation} | |
3493 | |
3494 Provided that $2^q \ge a$ this algorithm will produce a quotient that is either exactly correct or off by a value of one. In the context of Barrett | |
3495 reduction the value of $a$ is bound by $0 \le a \le (b - 1)^2$ meaning that $2^q \ge b^2$ is sufficient to ensure the reciprocal will have enough | |
3496 precision. | |
3497 | |
3498 Let $n$ represent the number of digits in $b$. This algorithm requires approximately $2n^2$ single precision multiplications to produce the quotient and | |
3499 another $n^2$ single precision multiplications to find the residue. In total $3n^2$ single precision multiplications are required to | |
3500 reduce the number. | |
3501 | |
3502 For example, if $b = 1179677$ and $q = 41$ ($2^q > b^2$), then the reciprocal $\mu$ is equal to $\lfloor 2^q / b \rfloor = 1864089$. Consider reducing | |
3503 $a = 180388626447$ modulo $b$ using the above reduction equation. The quotient using the new formula is $\lfloor (a \cdot \mu) / 2^q \rfloor = 152913$. | |
3504 By subtracting $152913b$ from $a$ the correct residue $a \equiv 677346 \mbox{ (mod }b\mbox{)}$ is found. | |
3505 | |
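Since $a \cdot \mu$ still fits in an unsigned 64--bit integer for these particular values, the example can be checked directly with a short standalone program (an illustration only, not the multiple precision routine).

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* Standalone check of the worked example above (illustrative only). */
int main(void)
{
   uint64_t b  = 1179677, a = 180388626447ULL;
   int      q  = 41;
   uint64_t mu = ((uint64_t)1 << q) / b;        /* floor(2^q / b) = 1864089  */
   uint64_t quot = (a * mu) >> q;               /* estimated quotient 152913 */
   uint64_t c    = a - quot * b;                /* candidate residue         */

   while (c >= b) c -= b;                       /* at most a small fix-up    */
   printf("mu = %llu, quotient = %llu, residue = %llu\n",
          (unsigned long long)mu, (unsigned long long)quot,
          (unsigned long long)c);               /* residue = 677346          */
   return 0;
}
\end{verbatim}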
3506 \subsection{Choosing a Radix Point} | |
3507 Using the fixed point representation a modular reduction can be performed with $3n^2$ single precision multiplications. If that were the best | |
3508 that could be achieved a full division\footnote{A division requires approximately $O(2cn^2)$ single precision multiplications for a small value of $c$. | |
3509 See~\ref{sec:division} for further details.} might as well be used in its place. The key to optimizing the reduction is to reduce the precision of | |
3510 the initial multiplication that finds the quotient. | |
3511 | |
3512 Let $a$ represent the number of which the residue is sought. Let $b$ represent the modulus used to find the residue. Let $m$ represent | |
3513 the number of digits in $b$. For the purposes of this discussion we will assume that the number of digits in $a$ is $2m$, which is generally true if | |
3514 two $m$-digit numbers have been multiplied. Dividing $a$ by $b$ is the same as dividing a $2m$ digit integer by a $m$ digit integer. Digits below the | |
3515 $m - 1$'th digit of $a$ will contribute at most a value of $1$ to the quotient because $\beta^k < b$ for any $0 \le k \le m - 1$. Another way to | |
3516 express this is by rewriting $a$ as two parts. If $a' \equiv a \mbox{ (mod }\beta^{m-1}\mbox{)}$ and $a'' = a - a'$ then
3517 ${a \over b} \equiv {{a' + a''} \over b}$ which is equivalent to ${a' \over b} + {a'' \over b}$. Since $a'$ is bound to be less than $b$ the quotient | |
3518 is bound by $0 \le {a' \over b} < 1$. | |
3519 | |
3520 Since the digits of $a'$ do not contribute much to the quotient the observation is that they might as well be zero. However, if the digits | |
3521 ``might as well be zero'' they might as well not be there in the first place. Let $q_0 = \lfloor a/\beta^{m-1} \rfloor$ represent the input | |
3522 with the irrelevant digits trimmed. Now the modular reduction is trimmed to the almost equivalent equation | |
3523 | |
3524 \begin{equation} | |
3525 c = a - b \cdot \lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor | |
3526 \end{equation} | |
3527 | |
3528 Note that the original divisor $2^q$ has been replaced with $\beta^{m+1}$ where in this case $q$ is a multiple of $lg(\beta)$. Also note that the | |
3529 exponent on the divisor when added to the amount $q_0$ was shifted by equals $2m$. If the optimization had not been performed the divisor | |
3530 would have the exponent $2m$ so in the end the exponents do ``add up''. Using the above equation the quotient | |
3531 $\lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor$ can be off from the true quotient by at most two. The original fixed point quotient can be off | |
3532 by as much as one (\textit{provided the radix point is chosen suitably}) and now that the lower irrelevant digits have been trimmed the quotient
3533 can be off by an additional value of one for a total of at most two. This implies that | |
3534 $0 \le a - b \cdot \lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor < 3b$. By first subtracting $b$ times the quotient and then conditionally subtracting | |
3535 $b$ once or twice the residue is found. | |
3536 | |
3537 The quotient is now found using $(m + 1)(m) = m^2 + m$ single precision multiplications and the residue with an additional $m^2$ single | |
3538 precision multiplications, ignoring the subtractions required. In total $2m^2 + m$ single precision multiplications are required to find the residue. | |
3539 This is considerably faster than the original attempt. | |
3540 | |
3541 For example, let $\beta = 10$ represent the radix of the digits. Let $b = 9999$ represent the modulus which implies $m = 4$. Let $a = 99929878$ | |
3542 represent the value of which the residue is desired. In this case $q = 8$ since $10^7 < 9999^2$ meaning that $\mu = \lfloor \beta^{q}/b \rfloor = 10001$. | |
3543 With the new observation the multiplicand for the quotient is equal to $q_0 = \lfloor a / \beta^{m - 1} \rfloor = 99929$. The quotient is then | |
3544 $\lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor = 9993$. Subtracting $9993b$ from $a$ and the correct residue $a \equiv 9871 \mbox{ (mod }b\mbox{)}$ | |
3545 is found. | |
3546 | |
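This example can also be verified with a few lines of standalone C, again an illustration only, using the small values from the example.

\begin{verbatim}
#include <stdio.h>

/* Standalone check of the beta = 10 example above (illustrative only). */
int main(void)
{
   long beta_m1  = 1000;                        /* beta^(m-1), m = 4          */
   long beta_mp1 = 100000;                      /* beta^(m+1)                 */
   long b  = 9999, a = 99929878;
   long mu = 100000000L / b;                    /* floor(beta^(2m)/b) = 10001 */
   long q0 = a / beta_m1;                       /* trimmed input  = 99929     */
   long q  = (q0 * mu) / beta_mp1;              /* quotient estimate = 9993   */
   long c  = a - q * b;

   while (c >= b) c -= b;                       /* at most two subtractions   */
   printf("q = %ld, residue = %ld\n", q, c);    /* residue = 9871             */
   return 0;
}
\end{verbatim}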
3547 \subsection{Trimming the Quotient} | |
3548 So far the reduction algorithm has been optimized from $3m^2$ single precision multiplications down to $2m^2 + m$ single precision multiplications. As | |
3549 it stands now the algorithm is already fairly fast compared to a full integer division algorithm. However, there is still room for | |
3550 optimization. | |
3551 | |
3552 After the first multiplication inside the quotient ($q_0 \cdot \mu$) the value is shifted right by $m + 1$ places effectively nullifying the lower | |
3553 half of the product. It would be nice to be able to remove those digits from the product to effectively cut down the number of single precision | |
3554 multiplications. If the number of digits in the modulus $m$ is far less than $\beta$ a full product is not required for the algorithm to work properly. | |
3555 In fact the lower $m - 2$ digits will not affect the upper half of the product at all and do not need to be computed. | |
3556 | |
3557 The value of $\mu$ is a $m$-digit number and $q_0$ is a $m + 1$ digit number. Using a full multiplier $(m + 1)(m) = m^2 + m$ single precision | |
3558 multiplications would be required. Using a multiplier that will only produce digits at and above the $m - 1$'th digit reduces the number | |
3559 of single precision multiplications to ${m^2 + m} \over 2$ single precision multiplications. | |
3560 | |
3561 \subsection{Trimming the Residue} | |
3562 After the quotient has been calculated it is used to reduce the input. As previously noted the algorithm is not exact and it can be off by a small | |
3563 multiple of the modulus, that is $0 \le a - b \cdot \lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor < 3b$. If $b$ is $m$ digits then the
3564 result of the reduction equation is a value of at most $m + 1$ digits (\textit{provided $3 < \beta$}) implying that the upper $m - 1$ digits are
3565 implicitly zero. | |
3566 | |
3567 The next optimization arises from this very fact. Instead of computing $b \cdot \lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor$ using a full | |
3568 $O(m^2)$ multiplication algorithm only the lower $m+1$ digits of the product have to be computed. Similarly the value of $a$ can | |
3569 be reduced modulo $\beta^{m+1}$ before the multiple of $b$ is subtracted which simplifies the subtraction as well. A multiplication that produces
3570 only the lower $m+1$ digits requires ${m^2 + 3m - 2} \over 2$ single precision multiplications. | |
3571 | |
3572 With both optimizations in place the algorithm is the algorithm Barrett proposed. It requires $m^2 + 2m - 1$ single precision multiplications which | |
3573 is considerably faster than the straightforward $3m^2$ method. | |
3574 | |
3575 \subsection{The Barrett Algorithm} | |
3576 \newpage\begin{figure}[!here] | |
3577 \begin{small} | |
3578 \begin{center} | |
3579 \begin{tabular}{l} | |
3580 \hline Algorithm \textbf{mp\_reduce}. \\ | |
3581 \textbf{Input}. mp\_int $a$, mp\_int $b$ and $\mu = \lfloor \beta^{2m}/b \rfloor, m = \lceil lg_{\beta}(b) \rceil, (0 \le a < b^2, b > 1)$ \\ | |
3582 \textbf{Output}. $a \mbox{ (mod }b\mbox{)}$ \\ | |
3583 \hline \\ | |
3584 Let $m$ represent the number of digits in $b$. \\ | |
3585 1. Make a copy of $a$ and store it in $q$. (\textit{mp\_init\_copy}) \\ | |
3586 2. $q \leftarrow \lfloor q / \beta^{m - 1} \rfloor$ (\textit{mp\_rshd}) \\ | |
3587 \\ | |
3588 Produce the quotient. \\ | |
3589 3. $q \leftarrow q \cdot \mu$ (\textit{note: only produce digits at or above $m-1$}) \\ | |
3590 4. $q \leftarrow \lfloor q / \beta^{m + 1} \rfloor$ \\ | |
3591 \\ | |
3592 Subtract the multiple of modulus from the input. \\ | |
3593 5. $a \leftarrow a \mbox{ (mod }\beta^{m+1}\mbox{)}$ (\textit{mp\_mod\_2d}) \\ | |
3594 6. $q \leftarrow q \cdot b \mbox{ (mod }\beta^{m+1}\mbox{)}$ (\textit{s\_mp\_mul\_digs}) \\ | |
3595 7. $a \leftarrow a - q$ (\textit{mp\_sub}) \\ | |
3596 \\ | |
3597 Add $\beta^{m+1}$ if a carry occurred. \\
3598 8. If $a < 0$ then (\textit{mp\_cmp\_d}) \\ | |
3599 \hspace{3mm}8.1 $q \leftarrow 1$ (\textit{mp\_set}) \\ | |
3600 \hspace{3mm}8.2 $q \leftarrow q \cdot \beta^{m+1}$ (\textit{mp\_lshd}) \\ | |
3601 \hspace{3mm}8.3 $a \leftarrow a + q$ \\ | |
3602 \\ | |
3603 Now subtract the modulus if the residue is too large (e.g. quotient too small). \\ | |
3604 9. While $a \ge b$ do (\textit{mp\_cmp}) \\ | |
3605 \hspace{3mm}9.1 $a \leftarrow a - b$ \\
3606 10. Clear $q$. \\ | |
3607 11. Return(\textit{MP\_OKAY}) \\ | |
3608 \hline | |
3609 \end{tabular} | |
3610 \end{center} | |
3611 \end{small} | |
3612 \caption{Algorithm mp\_reduce} | |
3613 \end{figure} | |
3614 | |
3615 \textbf{Algorithm mp\_reduce.} | |
3616 This algorithm will reduce the input $a$ modulo $b$ in place using the Barrett algorithm. It is loosely based on algorithm 14.42 of HAC | |
3617 \cite[pp. 602]{HAC} which is based on the paper from Paul Barrett \cite{BARRETT}. The algorithm has several restrictions and assumptions which must | |
3618 be adhered to for the algorithm to work. | |
3619 | |
3620 First the modulus $b$ is assumed to be positive and greater than one. If the modulus were less than or equal to one than subtracting | |
3621 a multiple of it would either accomplish nothing or actually enlarge the input. The input $a$ must be in the range $0 \le a < b^2$ in order | |
3622 for the quotient to have enough precision. If $a$ is the product of two numbers that were already reduced modulo $b$, this will not be a problem. | |
3623 Technically the algorithm will still work if $a \ge b^2$ but it will take much longer to finish. The value of $\mu$ is passed as an argument to this | |
3624 algorithm and is assumed to be calculated and stored before the algorithm is used. | |
3625 | |
3626 Recall that the multiplication for the quotient on step 3 must only produce digits at or above the $m-1$'th position. An algorithm called | |
3627 $s\_mp\_mul\_high\_digs$ which has not been presented is used to accomplish this task. The algorithm is based on $s\_mp\_mul\_digs$ except that | |
3628 instead of stopping at a given level of precision it starts at a given level of precision. This optimal algorithm can only be used if the number | |
3629 of digits in $b$ is very much smaller than $\beta$. | |
3630 | |
3631 While it is known that | |
3632 $a \ge b \cdot \lfloor (q_0 \cdot \mu) / \beta^{m+1} \rfloor$ only the lower $m+1$ digits are being used to compute the residue, so an implied | |
3633 ``borrow'' from the higher digits might leave a negative result. After the multiple of the modulus has been subtracted from $a$ the residue must be | |
3634 fixed up in case it is negative. The invariant $\beta^{m+1}$ must be added to the residue to make it positive again. | |
3635 | |
3636 The while loop at step 9 will subtract $b$ until the residue is less than $b$. If the algorithm is performed correctly this step is | |
3637 performed at most twice, and on average once. However, if $a \ge b^2$ then it will iterate substantially more times than it should.
3638 | |
3639 EXAM,bn_mp_reduce.c | |
3640 | |
3641 The first multiplication that determines the quotient can be performed by only producing the digits from $m - 1$ and up. This essentially halves | |
3642 the number of single precision multiplications required. However, the optimization is only safe if $\beta$ is much larger than the number of digits | |
3643 in the modulus. In the source code this is evaluated on lines @36,if@ to @44,}@ where algorithm s\_mp\_mul\_high\_digs is used when it is | |
3644 safe to do so. | |
3645 | |
3646 \subsection{The Barrett Setup Algorithm} | |
3647 In order to use algorithm mp\_reduce the value of $\mu$ must be calculated in advance. Ideally this value should be computed once and stored for | |
3648 future use so that the Barrett algorithm can be used without delay. | |
3649 | |
3650 \newpage\begin{figure}[!here] |
19 | 3651 \begin{small} |
3652 \begin{center} | |
3653 \begin{tabular}{l} | |
3654 \hline Algorithm \textbf{mp\_reduce\_setup}. \\ | |
3655 \textbf{Input}. mp\_int $b$ ($b > 1$) \\
3656 \textbf{Output}. $\mu \leftarrow \lfloor \beta^{2m}/b \rfloor$ \\
3657 \hline \\ | |
3658 1. $\mu \leftarrow 2^{2 \cdot lg(\beta) \cdot m}$ (\textit{mp\_2expt}) \\ | |
3659 2. $\mu \leftarrow \lfloor \mu / b \rfloor$ (\textit{mp\_div}) \\ | |
3660 3. Return(\textit{MP\_OKAY}) \\ | |
3661 \hline | |
3662 \end{tabular} | |
3663 \end{center} | |
3664 \end{small} | |
3665 \caption{Algorithm mp\_reduce\_setup} | |
3666 \end{figure} | |
3667 | |
3668 \textbf{Algorithm mp\_reduce\_setup.} | |
3669 This algorithm computes the reciprocal $\mu$ required for Barrett reduction. First $\beta^{2m}$ is calculated as $2^{2 \cdot lg(\beta) \cdot m}$ which | |
3670 is equivalent and much faster. The final value is computed by taking the integer quotient of $\lfloor \mu / b \rfloor$. | |
3671 | |
3672 EXAM,bn_mp_reduce_setup.c | |
3673 | |
3674 This simple routine calculates the reciprocal $\mu$ required by Barrett reduction. Note the extended usage of algorithm mp\_div where the variable | |
3675 which would receive the remainder is passed as NULL. As will be discussed in~\ref{sec:division} the division routine allows both the quotient and the
3676 remainder to be passed as NULL meaning to ignore the value. | |
3677 | |
3678 \section{The Montgomery Reduction} | |
3679 Montgomery reduction\footnote{Thanks to Niels Ferguson for his insightful explanation of the algorithm.} \cite{MONT} is by far the most interesting | |
3680 form of reduction in common use. It computes a modular residue which is not actually equal to the residue of the input but instead equal to a
3681 residue times a constant. However, as perplexing as this may sound the algorithm is relatively simple and very efficient. | |
3682 | |
3683 Throughout this entire section the variable $n$ will represent the modulus used to form the residue. As will be discussed shortly the value of | |
3684 $n$ must be odd. The variable $x$ will represent the quantity of which the residue is sought. Similar to the Barrett algorithm the input | |
3685 is restricted to $0 \le x < n^2$. To begin the description some simple number theory facts must be established. | |
3686 | |
3687 \textbf{Fact 1.} Adding $n$ to $x$ does not change the residue since in effect it adds one to the quotient $\lfloor x / n \rfloor$. Another way | |
3688 to explain this is that $n$ is (\textit{or multiples of $n$ are}) congruent to zero modulo $n$. Adding zero will not change the value of the residue. | |
3689 | |
3690 \textbf{Fact 2.} If $x$ is even then performing a division by two in $\Z$ is congruent to $x \cdot 2^{-1} \mbox{ (mod }n\mbox{)}$. Actually | |
3691 this is an application of the fact that if $x$ is evenly divisible by any $k \in \Z$ then division in $\Z$ will be congruent to | |
3692 multiplication by $k^{-1}$ modulo $n$. | |
3693 | |
3694 From these two simple facts the following simple algorithm can be derived. | |
3695 | |
3696 \newpage\begin{figure}[!here] | |
3697 \begin{small} | |
3698 \begin{center} | |
3699 \begin{tabular}{l} | |
3700 \hline Algorithm \textbf{Montgomery Reduction}. \\ | |
3701 \textbf{Input}. Integer $x$, $n$ and $k$ \\ | |
3702 \textbf{Output}. $2^{-k}x \mbox{ (mod }n\mbox{)}$ \\ | |
3703 \hline \\ | |
3704 1. for $t$ from $1$ to $k$ do \\ | |
3705 \hspace{3mm}1.1 If $x$ is odd then \\ | |
3706 \hspace{6mm}1.1.1 $x \leftarrow x + n$ \\ | |
3707 \hspace{3mm}1.2 $x \leftarrow x/2$ \\ | |
3708 2. Return $x$. \\ | |
3709 \hline | |
3710 \end{tabular} | |
3711 \end{center} | |
3712 \end{small} | |
3713 \caption{Algorithm Montgomery Reduction} | |
3714 \end{figure} | |
3715 | |
3716 The algorithm reduces the input one bit at a time using the two congruencies stated previously. Inside the loop $n$, which is odd, is | |
3717 added to $x$ if $x$ is odd. This forces $x$ to be even which allows the division by two in $\Z$ to be congruent to a modular division by two. Since | |
3718 $x$ is assumed to be initially much larger than $n$ the addition of $n$ will contribute an insignificant magnitude to $x$. Let $r$ represent the | |
3719 final result of the Montgomery algorithm. If $k > lg(n)$ and $0 \le x < n^2$ then the final result is limited to | |
3720 $0 \le r < \lfloor x/2^k \rfloor + n$. As a result at most a single subtraction is required to get the residue desired. | |
3721 | |
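A standalone sketch of this bit--wise process (a toy illustration, not a library routine) reproduces the example which follows.

\begin{verbatim}
#include <stdio.h>

/* Toy sketch of the bit-wise Montgomery reduction above: computes
   2^(-k) * x (mod n) for an odd modulus n (illustrative only).       */
static unsigned long mont_bitwise(unsigned long x, unsigned long n, int k)
{
   int t;

   for (t = 0; t < k; t++) {
      if (x & 1) {
         x += n;          /* make x even without changing the residue */
      }
      x >>= 1;            /* divide by two, i.e. multiply by 2^(-1)   */
   }
   return x;              /* in general one final subtraction of n may
                             still be required                         */
}

int main(void)
{
   unsigned long r = mont_bitwise(5555, 257, 8);
   /* r = 99 and (99 * 2^8) mod 257 = 158 = 5555 mod 257 */
   printf("r = %lu, check = %lu\n", r, (r << 8) % 257);
   return 0;
}
\end{verbatim}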
3722 \begin{figure}[here] | |
3723 \begin{small} | |
3724 \begin{center} | |
3725 \begin{tabular}{|c|l|} | |
3726 \hline \textbf{Step number ($t$)} & \textbf{Result ($x$)} \\ | |
3727 \hline $1$ & $x + n = 5812$, $x/2 = 2906$ \\ | |
3728 \hline $2$ & $x/2 = 1453$ \\ | |
3729 \hline $3$ & $x + n = 1710$, $x/2 = 855$ \\ | |
3730 \hline $4$ & $x + n = 1112$, $x/2 = 556$ \\ | |
3731 \hline $5$ & $x/2 = 278$ \\ | |
3732 \hline $6$ & $x/2 = 139$ \\ | |
3733 \hline $7$ & $x + n = 396$, $x/2 = 198$ \\ | |
3734 \hline $8$ & $x/2 = 99$ \\ | |
3735 \hline | |
3736 \end{tabular} | |
3737 \end{center} | |
3738 \end{small} | |
3739 \caption{Example of Montgomery Reduction (I)} | |
3740 \label{fig:MONT1} | |
3741 \end{figure} | |
3742 | |
3743 Consider the example in figure~\ref{fig:MONT1} which reduces $x = 5555$ modulo $n = 257$ when $k = 8$. The result of the algorithm $r = 99$ is | |
3744 congruent to the value of $2^{-8} \cdot 5555 \mbox{ (mod }257\mbox{)}$. When $r$ is multiplied by $2^8$ modulo $257$ the correct residue | |
3745 $r \equiv 158$ is produced. | |
3746 | |
3747 Let $k = \lfloor lg(n) \rfloor + 1$ represent the number of bits in $n$. The current algorithm requires $2k^2$ single precision shifts | |
3748 and $k^2$ single precision additions. At this rate the algorithm is most certainly slower than Barrett reduction and not terribly useful. | |
3749 Fortunately there exists an alternative representation of the algorithm. | |
3750 | |
3751 \begin{figure}[!here] | |
3752 \begin{small} | |
3753 \begin{center} | |
3754 \begin{tabular}{l} | |
3755 \hline Algorithm \textbf{Montgomery Reduction} (modified I). \\ | |
3756 \textbf{Input}. Integer $x$, $n$ and $k$ \\ | |
3757 \textbf{Output}. $2^{-k}x \mbox{ (mod }n\mbox{)}$ \\ | |
3758 \hline \\ | |
3759 1. for $t$ from $0$ to $k - 1$ do \\ | |
3760 \hspace{3mm}1.1 If the $t$'th bit of $x$ is one then \\ | |
3761 \hspace{6mm}1.1.1 $x \leftarrow x + 2^tn$ \\ | |
3762 2. Return $x/2^k$. \\ | |
3763 \hline | |
3764 \end{tabular} | |
3765 \end{center} | |
3766 \end{small} | |
3767 \caption{Algorithm Montgomery Reduction (modified I)} | |
3768 \end{figure} | |
3769 | |
3770 This algorithm is equivalent since $2^tn$ is a multiple of $n$ and the lower $k$ bits of $x$ are zero by step 2. The number of single | |
3771 precision shifts has now been reduced from $2k^2$ to $k^2 + k$ which is only a small improvement. | |
3772 | |
3773 \begin{figure}[here] | |
3774 \begin{small} | |
3775 \begin{center} | |
3776 \begin{tabular}{|c|l|r|} | |
3777 \hline \textbf{Step number ($t$)} & \textbf{Result ($x$)} & \textbf{Result ($x$) in Binary} \\ | |
3778 \hline -- & $5555$ & $1010110110011$ \\ | |
3779 \hline $1$ & $x + 2^{0}n = 5812$ & $1011010110100$ \\ | |
3780 \hline $2$ & $5812$ & $1011010110100$ \\ | |
3781 \hline $3$ & $x + 2^{2}n = 6840$ & $1101010111000$ \\ | |
3782 \hline $4$ & $x + 2^{3}n = 8896$ & $10001011000000$ \\ | |
3783 \hline $5$ & $8896$ & $10001011000000$ \\ | |
3784 \hline $6$ & $8896$ & $10001011000000$ \\ | |
3785 \hline $7$ & $x + 2^{6}n = 25344$ & $110001100000000$ \\ | |
3786 \hline $8$ & $25344$ & $110001100000000$ \\ | |
3787 \hline -- & $x/2^k = 99$ & \\ | |
3788 \hline | |
3789 \end{tabular} | |
3790 \end{center} | |
3791 \end{small} | |
3792 \caption{Example of Montgomery Reduction (II)} | |
3793 \label{fig:MONT2} | |
3794 \end{figure} | |
3795 | |
3796 Figure~\ref{fig:MONT2} demonstrates the modified algorithm reducing $x = 5555$ modulo $n = 257$ with $k = 8$. | |
3797 With this algorithm a single shift right at the end is the only right shift required to reduce the input instead of $k$ right shifts inside the | |
3798 loop. Note that in iterations $t = 2, 5, 6$ and $8$ the result $x$ is not changed. In those iterations the $t$'th bit of $x$ is | |
3799 zero and the appropriate multiple of $n$ does not need to be added to force the $t$'th bit of the result to zero. | |
3800 | |
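The following is a minimal C sketch of the modified algorithm for word sized operands. The function
name, the fixed width types and the assumption that the sum $x + 2^tn$ fits in 64 bits are purely
illustrative and are not part of the LibTomMath source.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Bit level Montgomery reduction (modified I) for word sized operands.
 * Computes x * 2^(-k) mod n assuming n is odd and every intermediate
 * value fits in 64 bits. */
static uint64_t mont_reduce_bits(uint64_t x, uint64_t n, int k)
{
   int t;
   for (t = 0; t < k; t++) {
      /* if the t'th bit of x is one, add 2^t * n to clear it */
      if ((x >> t) & 1) {
         x += n << t;
      }
   }
   /* the lower k bits of x are now zero */
   return x >> k;
}

int main(void)
{
   /* reproduces the example: prints 99, and (99 * 256) % 257 == 158 */
   printf("%llu\n", (unsigned long long)mont_reduce_bits(5555, 257, 8));
   return 0;
}
\end{verbatim}
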
3801 \subsection{Digit Based Montgomery Reduction} | |
3802 Instead of computing the reduction on a bit-by-bit basis it is actually much faster to compute it on a digit-by-digit basis. Consider the | |
3803 previous algorithm re-written to compute the Montgomery reduction in this new fashion. | |
3804 | |
3805 \begin{figure}[!here] | |
3806 \begin{small} | |
3807 \begin{center} | |
3808 \begin{tabular}{l} | |
3809 \hline Algorithm \textbf{Montgomery Reduction} (modified II). \\ | |
3810 \textbf{Input}. Integer $x$, $n$ and $k$ \\ | |
3811 \textbf{Output}. $\beta^{-k}x \mbox{ (mod }n\mbox{)}$ \\ | |
3812 \hline \\ | |
3813 1. for $t$ from $0$ to $k - 1$ do \\ | |
3814 \hspace{3mm}1.1 $x \leftarrow x + \mu n \beta^t$ \\ | |
3815 2. Return $x/\beta^k$. \\ | |
3816 \hline | |
3817 \end{tabular} | |
3818 \end{center} | |
3819 \end{small} | |
3820 \caption{Algorithm Montgomery Reduction (modified II)} | |
3821 \end{figure} | |
3822 | |
3823 The value $\mu n \beta^t$ is a multiple of the modulus $n$ meaning that it will not change the residue. If the first digit of | |
3824 the value $\mu n \beta^t$ equals the negative (modulo $\beta$) of the $t$'th digit of $x$ then the addition will result in a zero digit. This | |
3825 problem therefore reduces to solving the following congruence. | |
3826 | |
3827 \begin{center} | |
3828 \begin{tabular}{rcl} | |
3829 $x_t + \mu n_0$ & $\equiv$ & $0 \mbox{ (mod }\beta\mbox{)}$ \\ | |
3830 $\mu n_0$ & $\equiv$ & $-x_t \mbox{ (mod }\beta\mbox{)}$ \\ | |
3831 $\mu$ & $\equiv$ & $-x_t/n_0 \mbox{ (mod }\beta\mbox{)}$ \\ | |
3832 \end{tabular} | |
3833 \end{center} | |
3834 | |
3835 In each iteration of the loop on step 1 a new value of $\mu$ must be calculated. The value of $-1/n_0 \mbox{ (mod }\beta\mbox{)}$ is used | |
3836 extensively in this algorithm and should be precomputed. Let $\rho$ represent the negative of the modular inverse of $n_0$ modulo $\beta$. | |
3837 | |
3838 For example, let $\beta = 10$ represent the radix. Let $n = 17$ represent the modulus which implies $k = 2$ and $\rho \equiv 7$. Let $x = 33$ | |
3839 represent the value to reduce. | |
3840 | |
3841 \newpage\begin{figure} | |
3842 \begin{center} | |
3843 \begin{tabular}{|c|c|c|} | |
3844 \hline \textbf{Step ($t$)} & \textbf{Value of $x$} & \textbf{Value of $\mu$} \\ | |
3845 \hline -- & $33$ & --\\ | |
3846 \hline $0$ & $33 + \mu n = 50$ & $1$ \\ | |
3847 \hline $1$ & $50 + \mu n \beta = 900$ & $5$ \\ | |
3848 \hline | |
3849 \end{tabular} | |
3850 \end{center} | |
3851 \caption{Example of Montgomery Reduction} | |
3852 \end{figure} | |
3853 | |
3854 The value $900$ is then divided by $\beta^k$ to produce the final result $9$. The first observation is that $9 \nequiv x \mbox{ (mod }n\mbox{)} $ | |
3855 which implies the result is not the modular residue of $x$ modulo $n$. However, recall that the residue is actually multiplied by $\beta^{-k}$ in | |
3856 the algorithm. To get the true residue the value must be multiplied by $\beta^k$. In this case $\beta^k \equiv 15 \mbox{ (mod }n\mbox{)}$ and | |
3857 the correct residue is $9 \cdot 15 \equiv 16 \mbox{ (mod }n\mbox{)}$. | |
3858 | |
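The radix ten example can be reproduced directly with a short C sketch of the digit based method. The
array layout, the tiny radix and the helper name below are assumptions made only for this illustration
and bear no relation to how LibTomMath actually stores an mp\_int.

\begin{verbatim}
#include <stdio.h>

#define BETA 10   /* tiny radix used only to mirror the example */

/* Digit based Montgomery reduction: x has 2k+1 digits (least significant
 * first), n has k digits, rho = -1/n[0] mod BETA.  On return the digits
 * x[k], x[k+1], ... hold the reduced value x * BETA^(-k) mod n. */
static void mont_reduce_digits(int *x, const int *n, int k, int rho)
{
   int t, i, u, r, mu;
   for (t = 0; t < k; t++) {
      mu = (x[t] * rho) % BETA;          /* mu = -x[t]/n[0] mod BETA   */
      u  = 0;                            /* carry                      */
      for (i = 0; i < k; i++) {          /* x = x + mu * n * BETA^t    */
         r        = mu * n[i] + x[t + i] + u;
         x[t + i] = r % BETA;
         u        = r / BETA;
      }
      for (i = t + k; u != 0; i++) {     /* propagate remaining carry  */
         r    = x[i] + u;
         x[i] = r % BETA;
         u    = r / BETA;
      }
   }
}

int main(void)
{
   int x[5] = { 3, 3, 0, 0, 0 };         /* x = 33                     */
   int n[2] = { 7, 1 };                  /* n = 17, k = 2, rho = 7     */
   mont_reduce_digits(x, n, 2, 7);
   printf("%d\n", x[2]);                 /* prints 9                   */
   return 0;
}
\end{verbatim}
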
3859 \subsection{Baseline Montgomery Reduction} | |
3860 The baseline Montgomery reduction algorithm will produce the residue for any size input. It is designed to be a catch-all algorithm for | |
3861 Montgomery reductions. | |
3862 | |
3863 \newpage\begin{figure}[!here] | |
3864 \begin{small} | |
3865 \begin{center} | |
3866 \begin{tabular}{l} | |
3867 \hline Algorithm \textbf{mp\_montgomery\_reduce}. \\ | |
3868 \textbf{Input}. mp\_int $x$, mp\_int $n$ and a digit $\rho \equiv -1/n_0 \mbox{ (mod }\beta\mbox{)}$. \\ | |
3869 \hspace{11.5mm}($0 \le x < n^2, n > 1, (n, \beta) = 1, \beta^k > n$) \\ | |
3870 \textbf{Output}. $\beta^{-k}x \mbox{ (mod }n\mbox{)}$ \\ | |
3871 \hline \\ | |
3872 1. $digs \leftarrow 2n.used + 1$ \\ | |
3873 2. If $digs < MP\_WARRAY$ and $n.used < \delta$ then \\ | |
3874 \hspace{3mm}2.1 Use algorithm fast\_mp\_montgomery\_reduce instead. \\ | |
3875 \\ | |
3876 Setup $x$ for the reduction. \\ | |
3877 3. If $x.alloc < digs$ then grow $x$ to $digs$ digits. \\ | |
3878 4. $x.used \leftarrow digs$ \\ | |
3879 \\ | |
3880 Eliminate the lower $k$ digits. \\ | |
3881 5. For $ix$ from $0$ to $k - 1$ do \\ | |
3882 \hspace{3mm}5.1 $\mu \leftarrow x_{ix} \cdot \rho \mbox{ (mod }\beta\mbox{)}$ \\ | |
3883 \hspace{3mm}5.2 $u \leftarrow 0$ \\ | |
3884 \hspace{3mm}5.3 For $iy$ from $0$ to $k - 1$ do \\ | |
3885 \hspace{6mm}5.3.1 $\hat r \leftarrow \mu n_{iy} + x_{ix + iy} + u$ \\ | |
3886 \hspace{6mm}5.3.2 $x_{ix + iy} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
3887 \hspace{6mm}5.3.3 $u \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
3888 \hspace{3mm}5.4 While $u > 0$ do \\ | |
3889 \hspace{6mm}5.4.1 $iy \leftarrow iy + 1$ \\ | |
3890 \hspace{6mm}5.4.2 $x_{ix + iy} \leftarrow x_{ix + iy} + u$ \\ | |
3891 \hspace{6mm}5.4.3 $u \leftarrow \lfloor x_{ix+iy} / \beta \rfloor$ \\ | |
3892 \hspace{6mm}5.4.4 $x_{ix + iy} \leftarrow x_{ix+iy} \mbox{ (mod }\beta\mbox{)}$ \\ | |
3893 \\ | |
3894 Divide by $\beta^k$ and fix up as required. \\ | |
3895 6. $x \leftarrow \lfloor x / \beta^k \rfloor$ \\ | |
3896 7. If $x \ge n$ then \\ | |
3897 \hspace{3mm}7.1 $x \leftarrow x - n$ \\ | |
3898 8. Return(\textit{MP\_OKAY}). \\ | |
3899 \hline | |
3900 \end{tabular} | |
3901 \end{center} | |
3902 \end{small} | |
3903 \caption{Algorithm mp\_montgomery\_reduce} | |
3904 \end{figure} | |
3905 | |
3906 \textbf{Algorithm mp\_montgomery\_reduce.} | |
3907 This algorithm reduces the input $x$ modulo $n$ in place using the Montgomery reduction algorithm. The algorithm is loosely based | |
3908 on algorithm 14.32 of \cite[pp.601]{HAC} except it merges the multiplication of $\mu n \beta^t$ with the addition in the inner loop. The | |
3909 restrictions on this algorithm are fairly easy to adapt to. First $0 \le x < n^2$ bounds the input to numbers in the same range as | |
3910 for the Barrett algorithm. Additionally if $n > 1$ and $n$ is odd there will exist a modular inverse $\rho$. $\rho$ must be calculated in | |
3911 advance of this algorithm. Finally the variable $k$ is fixed and is simply a synonym for $n.used$. | |
3912 | |
3913 Step 2 decides whether a faster Montgomery algorithm can be used. It is based on the Comba technique meaning that there are limits on | |
3914 the size of the input. This algorithm is discussed in ~COMBARED~. | |
3915 | |
3916 Step 5 is the main reduction loop of the algorithm. The value of $\mu$ is calculated once per iteration in the outer loop. The inner loop | |
3917 calculates $x + \mu n \beta^{ix}$ by multiplying $\mu n$ and adding the result to $x$ shifted by $ix$ digits. Both the addition and | |
3918 multiplication are performed in the same loop to save time and memory. Step 5.4 will handle any additional carries that escape the inner loop. | |
3919 | |
3920 A quick inspection shows that this algorithm requires $n$ single precision multiplications for the outer loop and $n^2$ single precision multiplications | |
3921 in the inner loop. In total $n^2 + n$ single precision multiplications are required, which compares favourably to Barrett at $n^2 + 2n - 1$ single precision | |
3922 multiplications. | |
3923 | |
3924 EXAM,bn_mp_montgomery_reduce.c | |
3925 | |
3926 This is the baseline implementation of the Montgomery reduction algorithm. Lines @30,digs@ to @35,}@ determine if the Comba based | |
3927 routine can be used instead. Line @47,mu@ computes the value of $\mu$ for that particular iteration of the outer loop. | |
3928 | |
3929 The multiplication $\mu n \beta^{ix}$ is performed in one step in the inner loop. The alias $tmpx$ refers to the $ix$'th digit of $x$ and | |
3930 the alias $tmpn$ refers to the modulus $n$. | |
3931 | |
3932 \subsection{Faster ``Comba'' Montgomery Reduction} | |
3933 MARK,COMBARED | |
3934 | |
3935 The Montgomery reduction requires fewer single precision multiplications than a Barrett reduction, however it is much slower due to the serial | |
3936 nature of the inner loop. The Barrett reduction algorithm requires two slightly modified multipliers which can be implemented with the Comba | |
3937 technique. The Montgomery reduction algorithm cannot directly use the Comba technique to any significant advantage since the inner loop calculates | |
3938 a $k \times 1$ product $k$ times. | |
3939 | |
3940 The biggest obstacle is that at the $ix$'th iteration of the outer loop the value of $x_{ix}$ is required to calculate $\mu$. This means the | |
3941 carries from $0$ to $ix - 1$ must have been propagated upwards to form a valid $ix$'th digit. The solution as it turns out is very simple. | |
3942 Perform a Comba like multiplier and inside the outer loop just after the inner loop fix up the $ix + 1$'th digit by forwarding the carry. | |
3943 | |
3944 With this change in place the Montgomery reduction algorithm can be performed with a Comba style multiplication loop which substantially increases | |
3945 the speed of the algorithm. | |
3946 | |
3947 \newpage\begin{figure}[!here] | |
3948 \begin{small} | |
3949 \begin{center} | |
3950 \begin{tabular}{l} | |
3951 \hline Algorithm \textbf{fast\_mp\_montgomery\_reduce}. \\ | |
3952 \textbf{Input}. mp\_int $x$, mp\_int $n$ and a digit $\rho \equiv -1/n_0 \mbox{ (mod }\beta\mbox{)}$. \\ | |
3953 \hspace{11.5mm}($0 \le x < n^2, n > 1, (n, \beta) = 1, \beta^k > n$) \\ | |
3954 \textbf{Output}. $\beta^{-k}x \mbox{ (mod }n\mbox{)}$ \\ | |
3955 \hline \\ | |
3956 Place an array of \textbf{MP\_WARRAY} mp\_word variables called $\hat W$ on the stack. \\ | |
3957 1. if $x.alloc < n.used + 1$ then grow $x$ to $n.used + 1$ digits. \\ | |
3958 Copy the digits of $x$ into the array $\hat W$ \\ | |
3959 2. For $ix$ from $0$ to $x.used - 1$ do \\ | |
3960 \hspace{3mm}2.1 $\hat W_{ix} \leftarrow x_{ix}$ \\ | |
3961 3. For $ix$ from $x.used$ to $2n.used - 1$ do \\ | |
3962 \hspace{3mm}3.1 $\hat W_{ix} \leftarrow 0$ \\ | |
3963 Eliminate the lower $k$ digits. \\ | |
3964 4. for $ix$ from $0$ to $n.used - 1$ do \\ | |
3965 \hspace{3mm}4.1 $\mu \leftarrow \hat W_{ix} \cdot \rho \mbox{ (mod }\beta\mbox{)}$ \\ | |
3966 \hspace{3mm}4.2 For $iy$ from $0$ to $n.used - 1$ do \\ | |
3967 \hspace{6mm}4.2.1 $\hat W_{iy + ix} \leftarrow \hat W_{iy + ix} + \mu \cdot n_{iy}$ \\ | |
3968 \hspace{3mm}4.3 $\hat W_{ix + 1} \leftarrow \hat W_{ix + 1} + \lfloor \hat W_{ix} / \beta \rfloor$ \\ | |
3969 Propagate carries upwards. \\ | |
3970 5. for $ix$ from $n.used$ to $2n.used + 1$ do \\ | |
3971 \hspace{3mm}5.1 $\hat W_{ix + 1} \leftarrow \hat W_{ix + 1} + \lfloor \hat W_{ix} / \beta \rfloor$ \\ | |
3972 Shift right and reduce modulo $\beta$ simultaneously. \\ | |
3973 6. for $ix$ from $0$ to $n.used + 1$ do \\ | |
3974 \hspace{3mm}6.1 $x_{ix} \leftarrow \hat W_{ix + n.used} \mbox{ (mod }\beta\mbox{)}$ \\ | |
3975 Zero excess digits and fixup $x$. \\ | |
3976 7. if $x.used > n.used + 1$ then do \\ | |
3977 \hspace{3mm}7.1 for $ix$ from $n.used + 1$ to $x.used - 1$ do \\ | |
3978 \hspace{6mm}7.1.1 $x_{ix} \leftarrow 0$ \\ | |
3979 8. $x.used \leftarrow n.used + 1$ \\ | |
3980 9. Clamp excessive digits of $x$. \\ | |
3981 10. If $x \ge n$ then \\ | |
3982 \hspace{3mm}10.1 $x \leftarrow x - n$ \\ | |
3983 11. Return(\textit{MP\_OKAY}). \\ | |
3984 \hline | |
3985 \end{tabular} | |
3986 \end{center} | |
3987 \end{small} | |
3988 \caption{Algorithm fast\_mp\_montgomery\_reduce} | |
3989 \end{figure} | |
3990 | |
3991 \textbf{Algorithm fast\_mp\_montgomery\_reduce.} | |
3992 This algorithm will compute the Montgomery reduction of $x$ modulo $n$ using the Comba technique. It is on most computer platforms significantly | |
3993 faster than algorithm mp\_montgomery\_reduce and algorithm mp\_reduce (\textit{Barrett reduction}). The algorithm has the same restrictions | |
3994 on the input as the baseline reduction algorithm. An additional two restrictions are imposed on this algorithm. The number of digits $k$ in the | |
3995 modulus $n$ must not violate $MP\_WARRAY > 2k + 1$ and $n < \delta$. When $\beta = 2^{28}$ this algorithm can be used to reduce modulo | |
3996 a modulus of at most $3,556$ bits in length. | |
3997 | |
3998 As in the other Comba reduction algorithms there is a $\hat W$ array which stores the columns of the product. It is initially filled with the | |
3999 contents of $x$ with the excess digits zeroed. The reduction loop is at heart very similar to the baseline loop. The multiplication on step | |
4000 4.1 can be single precision only since $ab \mbox{ (mod }\beta\mbox{)} \equiv (a \mbox{ mod }\beta)(b \mbox{ mod }\beta) \mbox{ (mod }\beta\mbox{)}$. Some multipliers such | |
4001 as those on the ARM processors take a variable amount of time to complete depending on the number of bytes of result they must produce. By performing | |
4002 a single precision multiplication instead, only about half as much time is spent. | |
4003 | |
4004 Also note that digit $\hat W_{ix}$ must have the carry from the $ix - 1$'th digit propagated upwards in order for this to work. That is what step | |
4005 4.3 will do. In effect, over the $n.used$ iterations of the outer loop the lower $n.used$ columns all have their carries propagated forwards. Note | |
4006 how the upper bits of those same words are not reduced modulo $\beta$. This is because those values will be discarded shortly and there is no | |
4007 point. | |
4008 | |
4009 Step 5 will propagate the remainder of the carries upwards. On step 6 the columns are reduced modulo $\beta$ and shifted simultaneously as they are | |
4010 stored in the destination $x$. | |
4011 | |
4012 EXAM,bn_fast_mp_montgomery_reduce.c | |
4013 | |
4014 The $\hat W$ array is first filled with digits of $x$ on line @49,for@ then the rest of the digits are zeroed on line @54,for@. Both loops share | |
4015 the same alias variables to make the code easier to read. | |
4016 | |
4017 The value of $\mu$ is calculated in an interesting fashion. First the value $\hat W_{ix}$ is reduced modulo $\beta$ and cast to an mp\_digit. This | |
4018 forces the compiler to use a single precision multiplication and prevents any concerns about loss of precision. Line @101,>>@ fixes the carry | |
4019 for the next iteration of the loop by propagating the carry from $\hat W_{ix}$ to $\hat W_{ix+1}$. | |
4020 | |
4021 The for loop on line @113,for@ propagates the rest of the carries upwards through the columns. The for loop on line @126,for@ reduces the columns | |
4022 modulo $\beta$ and shifts them $k$ places at the same time. The alias $\_ \hat W$ actually refers to the array $\hat W$ starting at the $n.used$'th | |
4023 digit, that is $\_ \hat W_{t} = \hat W_{n.used + t}$. | |
4024 | |
4025 \subsection{Montgomery Setup} | |
4026 To calculate the variable $\rho$ a relatively simple algorithm will be required. | |
4027 | |
4028 \begin{figure}[!here] | |
4029 \begin{small} | |
4030 \begin{center} | |
4031 \begin{tabular}{l} | |
4032 \hline Algorithm \textbf{mp\_montgomery\_setup}. \\ | |
4033 \textbf{Input}. mp\_int $n$ ($n > 1$ and $(n, 2) = 1$) \\ | |
4034 \textbf{Output}. $\rho \equiv -1/n_0 \mbox{ (mod }\beta\mbox{)}$ \\ | |
4035 \hline \\ | |
4036 1. $b \leftarrow n_0$ \\ | |
4037 2. If $b$ is even return(\textit{MP\_VAL}) \\ | |
4038 3. $x \leftarrow (((b + 2) \mbox{ AND } 4) << 1) + b$ \\ | |
4039 4. for $k$ from 0 to $\lceil lg(lg(\beta)) \rceil - 2$ do \\ | |
4040 \hspace{3mm}4.1 $x \leftarrow x \cdot (2 - bx)$ \\ | |
4041 5. $\rho \leftarrow \beta - x \mbox{ (mod }\beta\mbox{)}$ \\ | |
4042 6. Return(\textit{MP\_OKAY}). \\ | |
4043 \hline | |
4044 \end{tabular} | |
4045 \end{center} | |
4046 \end{small} | |
4047 \caption{Algorithm mp\_montgomery\_setup} | |
4048 \end{figure} | |
4049 | |
4050 \textbf{Algorithm mp\_montgomery\_setup.} | |
4051 This algorithm will calculate the value of $\rho$ required within the Montgomery reduction algorithms. It uses a very interesting trick | |
4052 to calculate $1/n_0$ when $\beta$ is a power of two. | |
4053 | |
4054 EXAM,bn_mp_montgomery_setup.c | |
4055 | |
4056 This source code computes the value of $\rho$ required to perform Montgomery reduction. It has been modified to avoid performing excess | |
4057 multiplications when $\beta$ is not the default 28-bits. | |
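
The same trick is easy to experiment with on plain machine words. The routine below is a standalone
sketch (the names and the fixed 32-bit width are assumptions of the example, and it is not the library
function) which computes $\rho = -1/b \mbox{ mod } 2^{32}$ for an odd word $b$; each multiplication
doubles the number of correct low order bits of the inverse.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Compute rho = -1/b mod 2^32 for odd b (word sized sketch). */
static uint32_t mont_setup_word(uint32_t b)
{
   uint32_t x;
   x  = (((b + 2) & 4) << 1) + b;  /* x is the inverse of b modulo 2^4 */
   x *= 2 - b * x;                 /* now modulo 2^8                   */
   x *= 2 - b * x;                 /* now modulo 2^16                  */
   x *= 2 - b * x;                 /* now modulo 2^32                  */
   return (uint32_t)0 - x;         /* rho = -x mod 2^32                */
}

int main(void)
{
   uint32_t b   = 65537;           /* any odd value will do            */
   uint32_t rho = mont_setup_word(b);
   /* b * rho == -1 mod 2^32, so b * rho + 1 wraps around to zero */
   printf("%u\n", (unsigned)(b * rho + 1));
   return 0;
}
\end{verbatim}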
4058 | |
4059 \section{The Diminished Radix Algorithm} | |
4060 The Diminished Radix method of modular reduction \cite{DRMET} is a fairly clever technique which can be more efficient than either the Barrett | |
4061 or Montgomery methods for certain forms of moduli. The technique is based on the following simple congruence. | |
4062 | |
4063 \begin{equation} | |
4064 (x \mbox{ mod } n) + k \lfloor x / n \rfloor \equiv x \mbox{ (mod }(n - k)\mbox{)} | |
4065 \end{equation} | |
4066 | |
4067 This observation was used in the MMB \cite{MMB} block cipher to create a diffusion primitive. It used the fact that if $n = 2^{31}$ and $k=1$ then | |
4068 an x86 multiplier could produce the 62-bit product and use the ``shrd'' instruction to perform a double-precision right shift. The proof | |
4069 of the above equation is very simple. First write $x$ in the product form. | |
4070 | |
4071 \begin{equation} | |
4072 x = qn + r | |
4073 \end{equation} | |
4074 | |
4075 Now reduce both sides modulo $(n - k)$. | |
4076 | |
4077 \begin{equation} | |
4078 x \equiv qk + r \mbox{ (mod }(n-k)\mbox{)} | |
4079 \end{equation} | |
4080 | |
4081 The variable $n$ reduces modulo $n - k$ to $k$. By putting $q = \lfloor x/n \rfloor$ and $r = x \mbox{ mod } n$ | |
4082 into the equation the original congruence is reproduced, thus concluding the proof. The following algorithm is based on this observation. | |
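
As a quick numeric check of the congruence, let $n = 256$, $k = 3$ and $x = 1000$. Then
$\lfloor x / n \rfloor = 3$ and $x \mbox{ mod } n = 232$, so
$(x \mbox{ mod } n) + k \lfloor x / n \rfloor = 232 + 9 = 241$, which is indeed equal to
$1000 \mbox{ mod } 253$.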
4083 | |
4084 \begin{figure}[!here] | |
4085 \begin{small} | |
4086 \begin{center} | |
4087 \begin{tabular}{l} | |
4088 \hline Algorithm \textbf{Diminished Radix Reduction}. \\ | |
4089 \textbf{Input}. Integer $x$, $n$, $k$ \\ | |
4090 \textbf{Output}. $x \mbox{ mod } (n - k)$ \\ | |
4091 \hline \\ | |
4092 1. $q \leftarrow \lfloor x / n \rfloor$ \\ | |
4093 2. $q \leftarrow k \cdot q$ \\ | |
4094 3. $x \leftarrow x \mbox{ (mod }n\mbox{)}$ \\ | |
4095 4. $x \leftarrow x + q$ \\ | |
4096 5. If $x \ge (n - k)$ then \\ | |
4097 \hspace{3mm}5.1 $x \leftarrow x - (n - k)$ \\ | |
4098 \hspace{3mm}5.2 Goto step 1. \\ | |
4099 6. Return $x$ \\ | |
4100 \hline | |
4101 \end{tabular} | |
4102 \end{center} | |
4103 \end{small} | |
4104 \caption{Algorithm Diminished Radix Reduction} | |
4105 \label{fig:DR} | |
4106 \end{figure} | |
4107 | |
4108 This algorithm will reduce $x$ modulo $n - k$ and return the residue. If $0 \le x < (n - k)^2$ then the algorithm will almost always loop | |
4109 once or twice and occasionally three times. For the sake of simplicity the value of $x$ is bounded by the following simple polynomial. | |
4110 | |
4111 \begin{equation} | |
4112 0 \le x < n^2 + k^2 - 2nk | |
4113 \end{equation} | |
4114 | |
4115 The true bound is $0 \le x < (n - k - 1)^2$ but this has quite a few more terms. The value of $q$ after step 1 is bounded by the following. | |
4116 | |
4117 \begin{equation} | |
4118 q < n - 2k - k^2/n | |
4119 \end{equation} | |
4120 | |
4121 Since $k^2$ is considerably smaller than $n$ the term $k^2/n$ is effectively zero. The value of $x$ after step 3 is bounded trivially as | |
4122 $0 \le x < n$. By step four the sum $x + q$ is bounded by | |
4123 | |
4124 \begin{equation} | |
4125 0 \le q + x < (k + 1)n - 2k^2 - 1 | |
4126 \end{equation} | |
4127 | |
4128 With a second pass $q$ will be loosely bounded by $0 \le q < k^2$ after step 2 while $x$ will still be loosely bounded by $0 \le x < n$ after step 3. After the second pass it is highly unlikely that the | |
4129 sum in step 4 will exceed $n - k$. In practice fewer than three passes of the algorithm are required to reduce virtually every input in the | |
4130 range $0 \le x < (n - k - 1)^2$. | |
4131 | |
4132 \begin{figure} | |
4133 \begin{small} | |
4134 \begin{center} | |
4135 \begin{tabular}{|l|} | |
4136 \hline | |
4137 $x = 123456789, n = 256, k = 3$ \\ | |
4138 \hline $q \leftarrow \lfloor x/n \rfloor = 482253$ \\ | |
4139 $q \leftarrow q*k = 1446759$ \\ | |
4140 $x \leftarrow x \mbox{ mod } n = 21$ \\ | |
4141 $x \leftarrow x + q = 1446780$ \\ | |
4142 $x \leftarrow x - (n - k) = 1446527$ \\ | |
4143 \hline | |
4144 $q \leftarrow \lfloor x/n \rfloor = 5650$ \\ | |
4145 $q \leftarrow q*k = 16950$ \\ | |
4146 $x \leftarrow x \mbox{ mod } n = 127$ \\ | |
4147 $x \leftarrow x + q = 17077$ \\ | |
4148 $x \leftarrow x - (n - k) = 16824$ \\ | |
4149 \hline | |
4150 $q \leftarrow \lfloor x/n \rfloor = 65$ \\ | |
4151 $q \leftarrow q*k = 195$ \\ | |
4152 $x \leftarrow x \mbox{ mod } n = 184$ \\ | |
4153 $x \leftarrow x + q = 379$ \\ | |
4154 $x \leftarrow x - (n - k) = 126$ \\ | |
4155 \hline | |
4156 \end{tabular} | |
4157 \end{center} | |
4158 \end{small} | |
4159 \caption{Example Diminished Radix Reduction} | |
4160 \label{fig:EXDR} | |
4161 \end{figure} | |
4162 | |
4163 Figure~\ref{fig:EXDR} demonstrates the reduction of $x = 123456789$ modulo $n - k = 253$ when $n = 256$ and $k = 3$. Note that even while $x$ | |
4164 is considerably larger than $(n - k - 1)^2 = 63504$ the algorithm still converges on the modular residue exceedingly fast. In this case only | |
4165 three passes were required to find the residue $x \equiv 126$. | |
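
A direct word level transcription of figure~\ref{fig:DR} makes the example easy to trace. The code
below is only an illustrative sketch for single word values (the function name and types are
arbitrary); the mp\_int implementation follows in the next sections.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* Diminished Radix reduction of x modulo (n - k). */
static uint64_t dr_reduce(uint64_t x, uint64_t n, uint64_t k)
{
   for (;;) {
      uint64_t q = x / n;      /* step 1: quotient                 */
      x = (x % n) + q * k;     /* steps 2, 3 and 4 combined        */
      if (x < n - k) {
         return x;             /* fully reduced                    */
      }
      x -= n - k;              /* step 5, then go around again     */
   }
}

int main(void)
{
   /* reproduces the worked example: 123456789 mod 253 == 126 */
   printf("%llu\n", (unsigned long long)dr_reduce(123456789, 256, 3));
   return 0;
}
\end{verbatim}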
4166 | |
4167 | |
4168 \subsection{Choice of Moduli} | |
4169 On the surface this algorithm looks like a very expensive algorithm. It requires a couple of subtractions followed by multiplication and other | |
4170 modular reductions. The usefulness of this algorithm becomes exceedingly clear when an appropriate modulus is chosen. | |
4171 | |
4172 Division in general is a very expensive operation to perform. The one exception is when the division is by a power of the radix of representation used. | |
4173 Division by ten for example is simple for pencil and paper mathematics since it amounts to shifting the decimal place to the right. Similarly division | |
4174 by two (\textit{or powers of two}) is very simple for binary computers to perform. It would therefore seem logical to choose $n$ of the form $2^p$ | |
4175 which would imply that $\lfloor x / n \rfloor$ is a simple shift of $x$ right $p$ bits. | |
4176 | |
4177 However, there is one operation related to division by powers of two that is even faster than this. If $n = \beta^p$ then the division may be | |
4178 performed by moving whole digits to the right $p$ places. In practice division by $\beta^p$ is much faster than division by $2^p$ for any $p$. | |
4179 Also with the choice of $n = \beta^p$ reducing $x$ modulo $n$ merely requires zeroing the digits above the $p-1$'th digit of $x$. | |
4180 | |
4181 Throughout the next section the term ``restricted modulus'' will refer to a modulus of the form $\beta^p - k$ whereas the term ``unrestricted | |
4182 modulus'' will refer to a modulus of the form $2^p - k$. The word ``restricted'' in this case refers to the fact that it is based on the | |
4183 $2^p$ logic except $p$ must be a multiple of $lg(\beta)$. | |
4184 | |
4185 \subsection{Choice of $k$} | |
4186 Now that division and reduction (\textit{step 1 and 3 of figure~\ref{fig:DR}}) have been optimized to simple digit operations the multiplication by $k$ | |
4187 in step 2 is the most expensive operation. Fortunately the choice of $k$ is not terribly limited. For all intents and purposes it might | |
4188 as well be a single digit. The smaller the value of $k$ is the faster the algorithm will be. | |
4189 | |
4190 \subsection{Restricted Diminished Radix Reduction} | |
4191 The restricted Diminished Radix algorithm can quickly reduce an input modulo a modulus of the form $n = \beta^p - k$. This algorithm can reduce | |
4192 an input $x$ within the range $0 \le x < n^2$ using only a couple passes of the algorithm demonstrated in figure~\ref{fig:DR}. The implementation | |
4193 of this algorithm has been optimized to avoid additional overhead associated with a division by $\beta^p$, the multiplication by $k$ or the addition | |
4194 of $x$ and $q$. The resulting algorithm is very efficient and can lead to substantial improvements over Barrett and Montgomery reduction when modular | |
4195 exponentiations are performed. | |
4196 | |
4197 \newpage\begin{figure}[!here] | |
4198 \begin{small} | |
4199 \begin{center} | |
4200 \begin{tabular}{l} | |
4201 \hline Algorithm \textbf{mp\_dr\_reduce}. \\ | |
4202 \textbf{Input}. mp\_int $x$, $n$ and a mp\_digit $k = \beta - n_0$ \\ | |
4203 \hspace{11.5mm}($0 \le x < n^2$, $n > 1$, $0 < k < \beta$) \\ | |
4204 \textbf{Output}. $x \mbox{ mod } n$ \\ | |
4205 \hline \\ | |
4206 1. $m \leftarrow n.used$ \\ | |
4207 2. If $x.alloc < 2m$ then grow $x$ to $2m$ digits. \\ | |
4208 3. $\mu \leftarrow 0$ \\ | |
4209 4. for $i$ from $0$ to $m - 1$ do \\ | |
4210 \hspace{3mm}4.1 $\hat r \leftarrow k \cdot x_{m+i} + x_{i} + \mu$ \\ | |
4211 \hspace{3mm}4.2 $x_{i} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
4212 \hspace{3mm}4.3 $\mu \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
4213 5. $x_{m} \leftarrow \mu$ \\ | |
4214 6. for $i$ from $m + 1$ to $x.used - 1$ do \\ | |
4215 \hspace{3mm}6.1 $x_{i} \leftarrow 0$ \\ | |
4216 7. Clamp excess digits of $x$. \\ | |
4217 8. If $x \ge n$ then \\ | |
4218 \hspace{3mm}8.1 $x \leftarrow x - n$ \\ | |
4219 \hspace{3mm}8.2 Goto step 3. \\ | |
4220 9. Return(\textit{MP\_OKAY}). \\ | |
4221 \hline | |
4222 \end{tabular} | |
4223 \end{center} | |
4224 \end{small} | |
4225 \caption{Algorithm mp\_dr\_reduce} | |
4226 \end{figure} | |
4227 | |
4228 \textbf{Algorithm mp\_dr\_reduce.} | |
4229 This algorithm will perform the Diminished Radix reduction of $x$ modulo $n$. It has similar restrictions to those of the Barrett reduction | |
4230 with the addition that $n$ must be of the form $n = \beta^m - k$ where $0 < k <\beta$. | |
4231 | |
4232 This algorithm essentially implements the pseudo-code in figure~\ref{fig:DR} except with a slight optimization. The division by $\beta^m$, multiplication by $k$ | |
4233 and addition of $x \mbox{ mod }\beta^m$ are all performed simultaneously inside the loop on step 4. The division by $\beta^m$ is emulated by accessing | |
4234 the term at the $m+i$'th position which is subsequently multiplied by $k$ and added to the term at the $i$'th position. After the loop the $m$'th | |
4235 digit is set to the carry and the upper digits are zeroed. Steps 5 and 6 emulate the reduction modulo $\beta^m$ that should have happened to | |
4236 $x$ before the addition of the multiple of the upper half. | |
4237 | |
4238 At step 8 if $x$ is still larger than $n$ another pass of the algorithm is required. First $n$ is subtracted from $x$ and then the algorithm resumes | |
4239 at step 3. | |
4240 | |
4241 EXAM,bn_mp_dr_reduce.c | |
4242 | |
4243 The first step is to grow $x$ as required to $2m$ digits since the reduction is performed in place on $x$. The label on line @49,top:@ is where | |
4244 the algorithm will resume if further reduction passes are required. In theory it could be placed at the top of the function; however, the size of | |
4245 the modulus and the question of whether $x$ is large enough are invariant after the first pass, meaning that doing so would be a waste of time. | |
4246 | |
4247 The aliases $tmpx1$ and $tmpx2$ refer to the digits of $x$ where the latter is offset by $m$ digits. By reading digits from $x$ offset by $m$ digits | |
4248 a division by $\beta^m$ can be simulated virtually for free. The loop on line @61,for@ performs the bulk of the work (\textit{corresponds to step 4 of algorithm 7.11}) | |
4249 in this algorithm. | |
4250 | |
4251 By line @68,mu@ the pointer $tmpx1$ points to the $m$'th digit of $x$ which is where the final carry will be placed. Similarly by line @71,for@ the | |
4252 same pointer will point to the $m+1$'th digit where the zeroes will be placed. | |
4253 | |
4254 Since the algorithm is only valid if both $x$ and $n$ are greater than zero an unsigned comparison suffices to determine if another pass is required. | |
4255 With the same logic at line @82,sub@ the value of $x$ is known to be greater than or equal to $n$ meaning that an unsigned subtraction can be used | |
4256 as well. Since the destination of the subtraction is the larger of the inputs the call to algorithm s\_mp\_sub cannot fail and the return code | |
4257 does not need to be checked. | |
4258 | |
4259 \subsubsection{Setup} | |
4260 To setup the restricted Diminished Radix algorithm the value $k = \beta - n_0$ is required. This algorithm is not complicated but is provided for | |
4261 completeness. | |
4262 | |
4263 \begin{figure}[!here] | |
4264 \begin{small} | |
4265 \begin{center} | |
4266 \begin{tabular}{l} | |
4267 \hline Algorithm \textbf{mp\_dr\_setup}. \\ | |
4268 \textbf{Input}. mp\_int $n$ \\ | |
4269 \textbf{Output}. $k = \beta - n_0$ \\ | |
4270 \hline \\ | |
4271 1. $k \leftarrow \beta - n_0$ \\ | |
4272 \hline | |
4273 \end{tabular} | |
4274 \end{center} | |
4275 \end{small} | |
4276 \caption{Algorithm mp\_dr\_setup} | |
4277 \end{figure} | |
4278 | |
4279 EXAM,bn_mp_dr_setup.c | |
4280 | |
4281 \subsubsection{Modulus Detection} | |
4282 Another useful algorithm is one which detects a restricted Diminished Radix modulus. An integer is said to be | |
4283 of restricted Diminished Radix form if all of the digits are equal to $\beta - 1$ except the trailing digit which may be any value. | |
4284 | |
4285 \begin{figure}[!here] | |
4286 \begin{small} | |
4287 \begin{center} | |
4288 \begin{tabular}{l} | |
4289 \hline Algorithm \textbf{mp\_dr\_is\_modulus}. \\ | |
4290 \textbf{Input}. mp\_int $n$ \\ | |
4291 \textbf{Output}. $1$ if $n$ is in D.R form, $0$ otherwise \\ | |
4292 \hline | |
4293 1. If $n.used < 2$ then return($0$). \\ | |
4294 2. for $ix$ from $1$ to $n.used - 1$ do \\ | |
4295 \hspace{3mm}2.1 If $n_{ix} \ne \beta - 1$ return($0$). \\ | |
4296 3. Return($1$). \\ | |
4297 \hline | |
4298 \end{tabular} | |
4299 \end{center} | |
4300 \end{small} | |
4301 \caption{Algorithm mp\_dr\_is\_modulus} | |
4302 \end{figure} | |
4303 | |
4304 \textbf{Algorithm mp\_dr\_is\_modulus.} | |
4305 This algorithm determines if a value is in Diminished Radix form. Step 1 rejects obvious cases where fewer than two digits are | |
4306 in the mp\_int. Step 2 tests all but the first digit to see if they are equal to $\beta - 1$. If the algorithm manages to get to | |
4307 step 3 then $n$ must be of Diminished Radix form. | |
4308 | |
4309 EXAM,bn_mp_dr_is_modulus.c | |
4310 | |
4311 \subsection{Unrestricted Diminished Radix Reduction} | |
4312 The unrestricted Diminished Radix algorithm allows modular reductions to be performed when the modulus is of the form $2^p - k$. This algorithm | |
4313 is a straightforward adaptation of algorithm~\ref{fig:DR}. | |
4314 | |
4315 In general the restricted Diminished Radix reduction algorithm is much faster since it has considerably lower overhead. However, this new | |
4316 algorithm is much faster than either Montgomery or Barrett reduction when the moduli are of the appropriate form. | |
4317 | |
4318 \begin{figure}[!here] | |
4319 \begin{small} | |
4320 \begin{center} | |
4321 \begin{tabular}{l} | |
4322 \hline Algorithm \textbf{mp\_reduce\_2k}. \\ | |
4323 \textbf{Input}. mp\_int $a$ and $n$. mp\_digit $k$ \\ | |
4324 \hspace{11.5mm}($a \ge 0$, $n > 1$, $0 < k < \beta$, $n + k$ is a power of two) \\ | |
4325 \textbf{Output}. $a \mbox{ (mod }n\mbox{)}$ \\ | |
4326 \hline | |
4327 1. $p \leftarrow \lceil lg(n) \rceil$ (\textit{mp\_count\_bits}) \\ | |
4328 2. While $a \ge n$ do \\ | |
4329 \hspace{3mm}2.1 $q \leftarrow \lfloor a / 2^p \rfloor$ (\textit{mp\_div\_2d}) \\ | |
4330 \hspace{3mm}2.2 $a \leftarrow a \mbox{ (mod }2^p\mbox{)}$ (\textit{mp\_mod\_2d}) \\ | |
4331 \hspace{3mm}2.3 $q \leftarrow q \cdot k$ (\textit{mp\_mul\_d}) \\ | |
4332 \hspace{3mm}2.4 $a \leftarrow a - q$ (\textit{s\_mp\_sub}) \\ | |
4333 \hspace{3mm}2.5 If $a \ge n$ then do \\ | |
4334 \hspace{6mm}2.5.1 $a \leftarrow a - n$ \\ | |
4335 3. Return(\textit{MP\_OKAY}). \\ | |
4336 \hline | |
4337 \end{tabular} | |
4338 \end{center} | |
4339 \end{small} | |
4340 \caption{Algorithm mp\_reduce\_2k} | |
4341 \end{figure} | |
4342 | |
4343 \textbf{Algorithm mp\_reduce\_2k.} | |
4344 This algorithm quickly reduces an input $a$ modulo an unrestricted Diminished Radix modulus $n$. Division by $2^p$ is emulated with a right | |
4345 shift which makes the algorithm fairly inexpensive to use. | |
4346 | |
4347 EXAM,bn_mp_reduce_2k.c | |
4348 | |
4349 The algorithm mp\_count\_bits calculates the number of bits in an mp\_int which is used to find the initial value of $p$. The call to mp\_div\_2d | |
4350 on line @31,mp_div_2d@ calculates both the quotient $q$ and the remainder $a$ required. By doing both in a single function call the code size | |
4351 is kept fairly small. The multiplication by $k$ is only performed if $k > 1$. This allows reductions modulo $2^p - 1$ to be performed without | |
4352 any multiplications. | |
4353 | |
4354 The unsigned s\_mp\_add, mp\_cmp\_mag and s\_mp\_sub are used in place of their full sign counterparts since the inputs are only valid if they are | |
4355 positive. By using the unsigned versions the overhead is kept to a minimum. | |
4356 | |
4357 \subsubsection{Unrestricted Setup} | |
4358 To setup this reduction algorithm the value of $k = 2^p - n$ is required. | |
4359 | |
4360 \begin{figure}[!here] | |
4361 \begin{small} | |
4362 \begin{center} | |
4363 \begin{tabular}{l} | |
4364 \hline Algorithm \textbf{mp\_reduce\_2k\_setup}. \\ | |
4365 \textbf{Input}. mp\_int $n$ \\ | |
4366 \textbf{Output}. $k = 2^p - n$ \\ | |
4367 \hline | |
4368 1. $p \leftarrow \lceil lg(n) \rceil$ (\textit{mp\_count\_bits}) \\ | |
4369 2. $x \leftarrow 2^p$ (\textit{mp\_2expt}) \\ | |
4370 3. $x \leftarrow x - n$ (\textit{mp\_sub}) \\ | |
4371 4. $k \leftarrow x_0$ \\ | |
4372 5. Return(\textit{MP\_OKAY}). \\ | |
4373 \hline | |
4374 \end{tabular} | |
4375 \end{center} | |
4376 \end{small} | |
4377 \caption{Algorithm mp\_reduce\_2k\_setup} | |
4378 \end{figure} | |
4379 | |
4380 \textbf{Algorithm mp\_reduce\_2k\_setup.} | |
4381 This algorithm computes the value of $k$ required for the algorithm mp\_reduce\_2k. By making a temporary variable $x$ equal to $2^p$ a subtraction | |
4382 is sufficient to solve for $k$. Alternatively if $n$ has more than one digit the value of $k$ is simply $\beta - n_0$. | |
4383 | |
4384 EXAM,bn_mp_reduce_2k_setup.c | |
4385 | |
4386 \subsubsection{Unrestricted Detection} | |
4387 An integer $n$ is a valid unrestricted Diminished Radix modulus if either of the following is true. | |
4388 | |
4389 \begin{enumerate} | |
4390 \item The number has only one digit. | |
4391 \item The number has more than one digit and every bit from the $lg(\beta)$'th to the most significant is one. | |
4392 \end{enumerate} | |
4393 | |
4394 If either condition is true then there is a power of two $2^p$ such that $0 < 2^p - n < \beta$. If the input is only | |
4395 one digit then it will always be of the correct form. Otherwise all of the bits above the first digit must be one. This arises from the fact | |
4396 that there will be a value of $k$ which, when added to the modulus, causes a carry in the first digit which propagates all the way to the most | |
4397 significant bit. The resulting sum will be a power of two. | |
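
For example, with $\beta = 2^{28}$ the modulus $n = 2^{31} - 5$ has the two digits $n_1 = 7$ and
$n_0 = 2^{28} - 5$. Every bit of $n_1$ is one so the second condition holds, and the corresponding
$k = 2^{31} - n = 5$ satisfies $0 < k < \beta$.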
4398 | |
4399 \begin{figure}[!here] | |
4400 \begin{small} | |
4401 \begin{center} | |
4402 \begin{tabular}{l} | |
4403 \hline Algorithm \textbf{mp\_reduce\_is\_2k}. \\ | |
4404 \textbf{Input}. mp\_int $n$ \\ | |
4405 \textbf{Output}. $1$ if of proper form, $0$ otherwise \\ | |
4406 \hline | |
4407 1. If $n.used = 0$ then return($0$). \\ | |
4408 2. If $n.used = 1$ then return($1$). \\ | |
4409 3. $p \leftarrow \lceil lg(n) \rceil$ (\textit{mp\_count\_bits}) \\ | |
4410 4. for $x$ from $lg(\beta)$ to $p$ do \\ | |
4411 \hspace{3mm}4.1 If the ($x \mbox{ mod }lg(\beta)$)'th bit of the $\lfloor x / lg(\beta) \rfloor$'th digit of $n$ is zero then return($0$). \\ | |
4412 5. Return($1$). \\ | |
4413 \hline | |
4414 \end{tabular} | |
4415 \end{center} | |
4416 \end{small} | |
4417 \caption{Algorithm mp\_reduce\_is\_2k} | |
4418 \end{figure} | |
4419 | |
4420 \textbf{Algorithm mp\_reduce\_is\_2k.} | |
4421 This algorithm quickly determines if a modulus is of the form required for algorithm mp\_reduce\_2k to function properly. | |
4422 | |
4423 EXAM,bn_mp_reduce_is_2k.c | |
4424 | |
4425 | |
4426 | |
4427 \section{Algorithm Comparison} | |
4428 So far three very different algorithms for modular reduction have been discussed. Each of the algorithms has its own strengths and weaknesses, | |
4429 which makes having such a selection very useful. The following table summarizes the three algorithms along with comparisons of work factors. Since | |
4430 all three algorithms have the restriction that $0 \le x < n^2$ and $n > 1$ those limitations are not included in the table. | |
4431 | |
4432 \begin{center} | |
4433 \begin{small} | |
4434 \begin{tabular}{|c|c|c|c|c|c|} | |
4435 \hline \textbf{Method} & \textbf{Work Required} & \textbf{Limitations} & \textbf{$m = 8$} & \textbf{$m = 32$} & \textbf{$m = 64$} \\ | |
4436 \hline Barrett & $m^2 + 2m - 1$ & None & $79$ & $1087$ & $4223$ \\ | |
4437 \hline Montgomery & $m^2 + m$ & $n$ must be odd & $72$ & $1056$ & $4160$ \\ | |
4438 \hline D.R. & $2m$ & $n = \beta^m - k$ & $16$ & $64$ & $128$ \\ | |
4439 \hline | |
4440 \end{tabular} | |
4441 \end{small} | |
4442 \end{center} | |
4443 | |
4444 In theory Montgomery and Barrett reductions would require roughly the same amount of time to complete. However, in practice since Montgomery | |
4445 reduction can be written as a single function with the Comba technique it is much faster. Barrett reduction suffers from the overhead of | |
4446 calling the half precision multipliers, addition and division by $\beta$ algorithms. | |
4447 | |
4448 For almost every cryptographic algorithm Montgomery reduction is the algorithm of choice. The one set of algorithms where Diminished Radix reduction truly | |
4449 shines is those based on the discrete logarithm problem such as Diffie-Hellman \cite{DH} and ElGamal \cite{ELGAMAL}. In these algorithms | |
4450 primes of the form $\beta^m - k$ can be found and shared amongst users. These primes will allow the Diminished Radix algorithm to be used in | |
4451 modular exponentiation to greatly speed up the operation. | |
4452 | |
4453 | |
4454 | |
4455 \section*{Exercises} | |
4456 \begin{tabular}{cl} | |
4457 $\left [ 3 \right ]$ & Prove that the ``trick'' in algorithm mp\_montgomery\_setup actually \\ | |
4458 & calculates the correct value of $\rho$. \\ | |
4459 & \\ | |
4460 $\left [ 2 \right ]$ & Devise an algorithm to reduce modulo $n + k$ for small $k$ quickly. \\ | |
4461 & \\ | |
4462 $\left [ 4 \right ]$ & Prove that the pseudo-code algorithm ``Diminished Radix Reduction'' \\ | |
4463 & (\textit{figure~\ref{fig:DR}}) terminates. Also determine the probability that it will \\ | |
4464 & terminate within $1 \le k \le 10$ iterations. \\ | |
4465 & \\ | |
4466 \end{tabular} | |
4467 | |
4468 | |
4469 \chapter{Exponentiation} | |
4470 Exponentiation is the operation of raising one variable to the power of another, for example, $a^b$. A variant of exponentiation, computed | |
4471 in a finite field or ring, is called modular exponentiation. This latter style of operation is typically used in public key | |
4472 cryptosystems such as RSA and Diffie-Hellman. The ability to quickly compute modular exponentiations is of great benefit to any | |
4473 such cryptosystem and many methods have been sought to speed it up. | |
4474 | |
4475 \section{Exponentiation Basics} | |
4476 A trivial algorithm would simply multiply $a$ against itself $b - 1$ times to compute the exponentiation desired. However, as $b$ grows in size | |
4477 the number of multiplications becomes prohibitive. Imagine what would happen if $b$ $\approx$ $2^{1024}$ as is the case when computing an RSA signature | |
4478 with a $1024$-bit key. Such a calculation could never be completed as it would simply take far too long. | |
4479 | |
4480 Fortunately there is a very simple algorithm based on the laws of exponents. Recall that $lg_a(a^b) = b$ and that $lg_a(a^ba^c) = b + c$ which | |
4481 are two trivial relationships between the base and the exponent. Let $b_i$ represent the $i$'th bit of $b$ starting from the least | |
4482 significant bit. If $b$ is a $k$-bit integer then the following equation is true. | |
4483 | |
4484 \begin{equation} | |
4485 a^b = \prod_{i=0}^{k-1} a^{2^i \cdot b_i} | |
4486 \end{equation} | |
4487 | |
4488 By taking the base $a$ logarithm of both sides of the equation the following equation is the result. | |
4489 | |
4490 \begin{equation} | |
4491 b = \sum_{i=0}^{k-1}2^i \cdot b_i | |
4492 \end{equation} | |
4493 | |
4494 The term $a^{2^i}$ can be found from the $i - 1$'th term by squaring the term since $\left ( a^{2^i} \right )^2$ is equal to | |
4495 $a^{2^{i+1}}$. This observation forms the basis of essentially all fast exponentiation algorithms. It requires $k$ squarings and on average | |
4496 $k \over 2$ multiplications to compute the result. This is indeed quite an improvement over simply multiplying by $a$ a total of $b-1$ times. | |
4497 | |
4498 While this current method is a considerable speed up there are further improvements to be made. For example, the $a^{2^i}$ term does not need to | |
4499 be computed in an auxiliary variable. Consider the following equivalent algorithm. | |
4500 | |
4501 \begin{figure}[!here] | |
4502 \begin{small} | |
4503 \begin{center} | |
4504 \begin{tabular}{l} | |
4505 \hline Algorithm \textbf{Left to Right Exponentiation}. \\ | |
4506 \textbf{Input}. Integer $a$, $b$ and $k$ \\ | |
4507 \textbf{Output}. $c = a^b$ \\ | |
4508 \hline \\ | |
4509 1. $c \leftarrow 1$ \\ | |
4510 2. for $i$ from $k - 1$ to $0$ do \\ | |
4511 \hspace{3mm}2.1 $c \leftarrow c^2$ \\ | |
4512 \hspace{3mm}2.2 $c \leftarrow c \cdot a^{b_i}$ \\ | |
4513 3. Return $c$. \\ | |
4514 \hline | |
4515 \end{tabular} | |
4516 \end{center} | |
4517 \end{small} | |
4518 \caption{Left to Right Exponentiation} | |
4519 \label{fig:LTOR} | |
4520 \end{figure} | |
4521 | |
4522 This algorithm starts from the most significant bit and works towards the least significant bit. When the $i$'th bit of $b$ is set $a$ is | |
4523 multiplied against the current product. In each iteration the product is squared which doubles the exponent of the individual terms of the | |
4524 product. | |
4525 | |
4526 For example, let $b = 101100_2 \equiv 44_{10}$. The following chart demonstrates the actions of the algorithm. | |
4527 | |
4528 \newpage\begin{figure} | |
4529 \begin{center} | |
4530 \begin{tabular}{|c|c|} | |
4531 \hline \textbf{Value of $i$} & \textbf{Value of $c$} \\ | |
4532 \hline - & $1$ \\ | |
4533 \hline $5$ & $a$ \\ | |
4534 \hline $4$ & $a^2$ \\ | |
4535 \hline $3$ & $a^4 \cdot a$ \\ | |
4536 \hline $2$ & $a^8 \cdot a^2 \cdot a$ \\ | |
4537 \hline $1$ & $a^{16} \cdot a^4 \cdot a^2$ \\ | |
4538 \hline $0$ & $a^{32} \cdot a^8 \cdot a^4$ \\ | |
4539 \hline | |
4540 \end{tabular} | |
4541 \end{center} | |
4542 \caption{Example of Left to Right Exponentiation} | |
4543 \end{figure} | |
4544 | |
4545 When the product $a^{32} \cdot a^8 \cdot a^4$ is simplified it is equal to $a^{44}$ which is the desired exponentiation. This particular algorithm is | |
4546 called ``Left to Right'' because it reads the exponent in that order. All of the exponentiation algorithms that will be presented are of this nature. | |
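
A word sized C transcription of figure~\ref{fig:LTOR} shows how little code the method requires. The
function below is only a sketch for small unsigned operands (the names and the absence of any overflow
handling are assumptions of the example).

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* Left to right exponentiation for word sized operands.  There is no
 * modulus here so the result must fit in 64 bits. */
static uint64_t expt_ltor(uint64_t a, uint32_t b, int k)
{
   uint64_t c = 1;
   int i;

   for (i = k - 1; i >= 0; i--) {
      c = c * c;                     /* step 2.1: always square        */
      if ((b >> i) & 1) {
         c = c * a;                  /* step 2.2: multiply if bit set  */
      }
   }
   return c;
}

int main(void)
{
   /* b = 101100 in binary (44); prints 2^44 = 17592186044416 */
   printf("%llu\n", (unsigned long long)expt_ltor(2, 44, 6));
   return 0;
}
\end{verbatim}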
4547 | |
4548 \subsection{Single Digit Exponentiation} | |
4549 The first algorithm in the series of exponentiation algorithms will be an unbounded algorithm where the exponent is a single digit. It is intended | |
4550 to be used when a small power of an input is required (\textit{e.g. $a^5$}). It is faster than simply multiplying $b - 1$ times for all values of | |
4551 $b$ that are greater than three. | |
4552 | |
4553 \newpage\begin{figure}[!here] | |
4554 \begin{small} | |
4555 \begin{center} | |
4556 \begin{tabular}{l} | |
4557 \hline Algorithm \textbf{mp\_expt\_d}. \\ | |
4558 \textbf{Input}. mp\_int $a$ and mp\_digit $b$ \\ | |
4559 \textbf{Output}. $c = a^b$ \\ | |
4560 \hline \\ | |
4561 1. $g \leftarrow a$ (\textit{mp\_init\_copy}) \\ | |
4562 2. $c \leftarrow 1$ (\textit{mp\_set}) \\ | |
4563 3. for $x$ from 1 to $lg(\beta)$ do \\ | |
4564 \hspace{3mm}3.1 $c \leftarrow c^2$ (\textit{mp\_sqr}) \\ | |
4565 \hspace{3mm}3.2 If $b$ AND $2^{lg(\beta) - 1} \ne 0$ then \\ | |
4566 \hspace{6mm}3.2.1 $c \leftarrow c \cdot g$ (\textit{mp\_mul}) \\ | |
4567 \hspace{3mm}3.3 $b \leftarrow b << 1$ \\ | |
4568 4. Clear $g$. \\ | |
4569 5. Return(\textit{MP\_OKAY}). \\ | |
4570 \hline | |
4571 \end{tabular} | |
4572 \end{center} | |
4573 \end{small} | |
4574 \caption{Algorithm mp\_expt\_d} | |
4575 \end{figure} | |
4576 | |
4577 \textbf{Algorithm mp\_expt\_d.} | |
4578 This algorithm computes the value of $a$ raised to the power of a single digit $b$. It uses the left to right exponentiation algorithm to | |
4579 quickly compute the exponentiation. It is loosely based on algorithm 14.79 of HAC \cite[pp. 615]{HAC} with the difference that the | |
4580 exponent is a fixed width. | |
4581 | |
4582 A copy of $a$ is made first to allow the destination variable $c$ to be the same as the source variable $a$. The result is set to the initial value of | |
4583 $1$ in the subsequent step. | |
4584 | |
4585 Inside the loop the exponent is read from the most significant bit first down to the least significant bit. First $c$ is invariably squared | |
4586 on step 3.1. In the following step if the most significant bit of $b$ is one the copy of $a$ is multiplied against $c$. The value | |
4587 of $b$ is shifted left one bit to make the next bit down from the most significant bit the new most significant bit. In effect each | |
4588 iteration of the loop moves the bits of the exponent $b$ upwards to the most significant location. | |
4589 | |
4590 EXAM,bn_mp_expt_d.c | |
4591 | |
4592 Line @29,mp_set@ sets the initial value of the result to $1$. Next the loop on line @31,for@ steps through each bit of the exponent starting from | |
4593 the most significant down towards the least significant. The invariant squaring operation placed on line @333,mp_sqr@ is performed first. After | |
4594 the squaring the result $c$ is multiplied by the base $g$ if and only if the most significant bit of the exponent is set. The shift on line | |
4595 @47,<<@ moves all of the bits of the exponent upwards towards the most significant location. | |
4596 | |
4597 \section{$k$-ary Exponentiation} | |
4598 When calculating an exponentiation the most time consuming bottleneck is the multiplications which are in general a small factor | |
4599 slower than squaring. Recall from the previous algorithm that $b_{i}$ refers to the $i$'th bit of the exponent $b$. Suppose instead it referred to | |
4600 the $i$'th $k$-bit digit of the exponent of $b$. For $k = 1$ the definitions are synonymous and for $k > 1$ algorithm~\ref{fig:KARY} | |
4601 computes the same exponentiation. A group of $k$ bits from the exponent is called a \textit{window}. That is, it is a small window onto only a | |
4602 portion of the entire exponent. Consider the following modification to the basic left to right exponentiation algorithm. | |
4603 | |
4604 \begin{figure}[!here] | |
4605 \begin{small} | |
4606 \begin{center} | |
4607 \begin{tabular}{l} | |
4608 \hline Algorithm \textbf{$k$-ary Exponentiation}. \\ | |
4609 \textbf{Input}. Integer $a$, $b$, $k$ and $t$ \\ | |
4610 \textbf{Output}. $c = a^b$ \\ | |
4611 \hline \\ | |
4612 1. $c \leftarrow 1$ \\ | |
4613 2. for $i$ from $t - 1$ to $0$ do \\ | |
4614 \hspace{3mm}2.1 $c \leftarrow c^{2^k} $ \\ | |
4615 \hspace{3mm}2.2 Extract the $i$'th $k$-bit word from $b$ and store it in $g$. \\ | |
4616 \hspace{3mm}2.3 $c \leftarrow c \cdot a^g$ \\ | |
4617 3. Return $c$. \\ | |
4618 \hline | |
4619 \end{tabular} | |
4620 \end{center} | |
4621 \end{small} | |
4622 \caption{$k$-ary Exponentiation} | |
4623 \label{fig:KARY} | |
4624 \end{figure} | |
4625 | |
4626 The squaring on step 2.1 can be calculated by squaring the value $c$ successively $k$ times. If the values of $a^g$ for $0 < g < 2^k$ have been | |
4627 precomputed this algorithm requires only $t$ multiplications and $tk$ squarings. The table can be generated with $2^{k - 1} - 1$ squarings and | |
4628 $2^{k - 1} + 1$ multiplications. This algorithm assumes that the number of bits in the exponent is evenly divisible by $k$. | |
4629 However, when it is not, the remaining $0 < x \le k - 1$ bits can be handled with algorithm~\ref{fig:LTOR}. | |
4630 | |
4631 Suppose $k = 4$ and $t = 100$. This modified algorithm will require $109$ multiplications and $408$ squarings to compute the exponentiation. The | |
4632 original algorithm would on average have required $200$ multiplications and $400$ squarings to compute the same value. The total number of squarings | |
4633 has increased slightly but the number of multiplications has nearly halved. | |
4634 | |
4635 \subsection{Optimal Values of $k$} | |
4636 An optimal value of $k$ will minimize $2^{k} + \lfloor n / k \rfloor + n - 1$ for a fixed number of bits in the exponent $n$. The simplest | |
4637 approach is to brute force search amongst the values $k = 2, 3, \ldots, 8$ for the lowest result. Table~\ref{fig:OPTK} lists optimal values of $k$ | |
4638 for various exponent sizes and compares the number of multiplications and squarings required against algorithm~\ref{fig:LTOR}. | |
4639 | |
4640 \begin{figure}[here] | |
4641 \begin{center} | |
4642 \begin{small} | |
4643 \begin{tabular}{|c|c|c|c|} | |
4644 \hline \textbf{Exponent (bits)} & \textbf{Optimal $k$} & \textbf{Work at $k$} & \textbf{Work with ~\ref{fig:LTOR}} \\ | |
4645 \hline $16$ & $2$ & $27$ & $24$ \\ | |
4646 \hline $32$ & $3$ & $49$ & $48$ \\ | |
4647 \hline $64$ & $3$ & $92$ & $96$ \\ | |
4648 \hline $128$ & $4$ & $175$ & $192$ \\ | |
4649 \hline $256$ & $4$ & $335$ & $384$ \\ | |
4650 \hline $512$ & $5$ & $645$ & $768$ \\ | |
4651 \hline $1024$ & $6$ & $1257$ & $1536$ \\ | |
4652 \hline $2048$ & $6$ & $2452$ & $3072$ \\ | |
4653 \hline $4096$ & $7$ & $4808$ & $6144$ \\ | |
4654 \hline | |
4655 \end{tabular} | |
4656 \end{small} | |
4657 \end{center} | |
4658 \caption{Optimal Values of $k$ for $k$-ary Exponentiation} | |
4659 \label{fig:OPTK} | |
4660 \end{figure} | |
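
The entries of figure~\ref{fig:OPTK} are easy to reproduce. The short program below (an illustration
only, with arbitrary names) performs the brute force search over $k = 2, 3, \ldots, 8$ using the work
estimate above.

\begin{verbatim}
#include <stdio.h>

/* Brute force search for the k minimising 2^k + floor(n/k) + n - 1,
 * reproducing the optimal k and work columns of the table above. */
int main(void)
{
   int sizes[] = { 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 };
   int i, k, n, work, best_k, best_work;

   for (i = 0; i < (int)(sizeof(sizes)/sizeof(sizes[0])); i++) {
      n         = sizes[i];
      best_k    = 2;
      best_work = (1 << 2) + (n / 2) + n - 1;
      for (k = 3; k <= 8; k++) {
         work = (1 << k) + (n / k) + n - 1;
         if (work < best_work) {
            best_work = work;
            best_k    = k;
         }
      }
      printf("%4d bits: k = %d, work = %d\n", n, best_k, best_work);
   }
   return 0;
}
\end{verbatim}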
4661 | |
4662 \subsection{Sliding-Window Exponentiation} | |
4663 A simple modification to the previous algorithm is to only generate the upper half of the table in the range $2^{k-1} \le g < 2^k$. Essentially | |
4664 this is a table for all values of $g$ where the most significant bit of $g$ is a one. However, in order for this to be allowed in the | |
4665 algorithm values of $g$ in the range $0 \le g < 2^{k-1}$ must be avoided. | |
4666 | |
4667 Table~\ref{fig:OPTK2} lists optimal values of $k$ for various exponent sizes and compares the work required against algorithm~\ref{fig:KARY}. | |
4668 | |
4669 \begin{figure}[here] | |
4670 \begin{center} | |
4671 \begin{small} | |
4672 \begin{tabular}{|c|c|c|c|} | |
4673 \hline \textbf{Exponent (bits)} & \textbf{Optimal $k$} & \textbf{Work at $k$} & \textbf{Work with ~\ref{fig:KARY}} \\ | |
4674 \hline $16$ & $3$ & $24$ & $27$ \\ | |
4675 \hline $32$ & $3$ & $45$ & $49$ \\ | |
4676 \hline $64$ & $4$ & $87$ & $92$ \\ | |
4677 \hline $128$ & $4$ & $167$ & $175$ \\ | |
4678 \hline $256$ & $5$ & $322$ & $335$ \\ | |
4679 \hline $512$ & $6$ & $628$ & $645$ \\ | |
4680 \hline $1024$ & $6$ & $1225$ & $1257$ \\ | |
4681 \hline $2048$ & $7$ & $2403$ & $2452$ \\ | |
4682 \hline $4096$ & $8$ & $4735$ & $4808$ \\ | |
4683 \hline | |
4684 \end{tabular} | |
4685 \end{small} | |
4686 \end{center} | |
4687 \caption{Optimal Values of $k$ for Sliding Window Exponentiation} | |
4688 \label{fig:OPTK2} | |
4689 \end{figure} | |
4690 | |
4691 \newpage\begin{figure}[!here] | |
4692 \begin{small} | |
4693 \begin{center} | |
4694 \begin{tabular}{l} | |
4695 \hline Algorithm \textbf{Sliding Window $k$-ary Exponentiation}. \\ | |
4696 \textbf{Input}. Integer $a$, $b$, $k$ and $t$ \\ | |
4697 \textbf{Output}. $c = a^b$ \\ | |
4698 \hline \\ | |
4699 1. $c \leftarrow 1$ \\ | |
4700 2. for $i$ from $t - 1$ to $0$ do \\ | |
4701 \hspace{3mm}2.1 If the $i$'th bit of $b$ is a zero then \\ | |
4702 \hspace{6mm}2.1.1 $c \leftarrow c^2$ \\ | |
4703 \hspace{3mm}2.2 else do \\ | |
4704 \hspace{6mm}2.2.1 $c \leftarrow c^{2^k}$ \\ | |
4705 \hspace{6mm}2.2.2 Extract the $k$ bits from $(b_{i}b_{i-1}\ldots b_{i-(k-1)})$ and store them in $g$. \\ | |
4706 \hspace{6mm}2.2.3 $c \leftarrow c \cdot a^g$ \\ | |
4707 \hspace{6mm}2.2.4 $i \leftarrow i - k$ \\ | |
4708 3. Return $c$. \\ | |
4709 \hline | |
4710 \end{tabular} | |
4711 \end{center} | |
4712 \end{small} | |
4713 \caption{Sliding Window $k$-ary Exponentiation} | |
4714 \end{figure} | |
4715 | |
4716 Similar to the previous algorithm this algorithm must have a special handler when fewer than $k$ bits are left in the exponent. While this | |
4717 algorithm requires the same number of squarings it can potentially have fewer multiplications. The pre-computed table $a^g$ is also half | |
4718 the size of the previous table. | |
4719 | |
4720 Consider the exponent $b = 111101011001000_2 \equiv 31432_{10}$ with $k = 3$ using both algorithms. The first algorithm will divide the exponent up as | |
4721 the following five $3$-bit words $b \equiv \left ( 111, 101, 011, 001, 000 \right )_{2}$. The second algorithm will break the | |
4722 exponent as $b \equiv \left ( 111, 101, 0, 110, 0, 100, 0 \right )_{2}$. The single $0$ digits in the second representation are where | |
4723 a single squaring took place instead of a squaring and multiplication. In total the first method requires $10$ multiplications and $18$ | |
4724 squarings. The second method requires $8$ multiplications and $18$ squarings. | |
4725 | |
4726 In general the sliding window method is never slower than the generic $k$-ary method and often it is slightly faster. | |
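
The scanning order of the sliding window method can be seen in the following word sized sketch. It is
an illustration only (the names, the fixed window size and the lack of overflow handling are
assumptions of the example); a zero bit costs a single squaring while a window beginning with a one bit
costs $K$ squarings and one multiplication.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

#define K 3   /* window size, chosen only for this sketch */

/* Sliding window exponentiation for word sized operands.  The result
 * must fit in 64 bits; this only illustrates the scanning order. */
static uint64_t expt_sliding(uint64_t a, uint32_t b, int t)
{
   uint64_t tab[1 << K];   /* tab[g] = a^g for 2^(K-1) <= g < 2^K */
   uint64_t c = 1;
   int i, j, g;

   /* build the upper half of the table of small powers */
   tab[1 << (K - 1)] = 1;
   for (j = 0; j < (1 << (K - 1)); j++) {
      tab[1 << (K - 1)] *= a;                 /* a^(2^(K-1))        */
   }
   for (g = (1 << (K - 1)) + 1; g < (1 << K); g++) {
      tab[g] = tab[g - 1] * a;
   }

   i = t - 1;
   while (i >= 0) {
      if (((b >> i) & 1) == 0) {
         c = c * c;                            /* lone zero bit      */
         i--;
      } else if (i >= K - 1) {
         for (j = 0; j < K; j++) {
            c = c * c;                         /* square K times     */
         }
         g = (b >> (i - K + 1)) & ((1 << K) - 1);
         c = c * tab[g];                       /* one multiplication */
         i -= K;
      } else {
         c = c * c;                            /* tail: fewer than K */
         c = c * a;                            /* bits are left      */
         i--;
      }
   }
   return c;
}

int main(void)
{
   /* b = 101100 in binary (44); prints 2^44 = 17592186044416 */
   printf("%llu\n", (unsigned long long)expt_sliding(2, 44, 6));
   return 0;
}
\end{verbatim}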
4727 | |
4728 \section{Modular Exponentiation} | |
4729 | |
4730 Modular exponentiation is essentially computing the power of a base within a finite field or ring. For example, computing | |
4731 $d \equiv a^b \mbox{ (mod }c\mbox{)}$ is a modular exponentiation. Instead of first computing $a^b$ and then reducing it | |
4732 modulo $c$ the intermediate result is reduced modulo $c$ after every squaring or multiplication operation. | |
4733 | |
4734 This guarantees that any intermediate result is bounded by $0 \le d \le c^2 - 2c + 1$ and can be reduced modulo $c$ quickly using | |
4735 one of the algorithms presented in ~REDUCTION~. | |
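
The idea is captured by the following word sized sketch (an illustration only, not the structure of the
actual LibTomMath routines): the intermediate value is reduced after every squaring and multiplication
so it never grows beyond the bound above.

\begin{verbatim}
#include <stdio.h>
#include <stdint.h>

/* Modular exponentiation a^b mod c for word sized operands.  Every
 * squaring and multiplication is reduced immediately, so intermediate
 * values stay below c^2 (which must fit in 64 bits here). */
static uint64_t exptmod_word(uint64_t a, uint64_t b, uint64_t c)
{
   uint64_t d = 1;
   int i;

   a %= c;
   /* scan the exponent from the most significant bit downwards */
   for (i = 63; i >= 0; i--) {
      d = (d * d) % c;               /* square, reduce               */
      if ((b >> i) & 1) {
         d = (d * a) % c;            /* multiply, reduce             */
      }
   }
   return d;
}

int main(void)
{
   printf("%llu\n", (unsigned long long)exptmod_word(3, 44, 257));  /* 73 */
   return 0;
}
\end{verbatim}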
4736 | |
4737 Before the actual modular exponentiation algorithm can be written, a wrapper algorithm is required. This algorithm | |
4738 will allow the exponent $b$ to be negative, in which case the result is computed as $d \equiv \left (1 / a \right )^{\vert b \vert} \mbox{ (mod }c\mbox{)}$. The | |
4739 value of $(1/a) \mbox{ mod }c$ is computed using the modular inverse (\textit{see \ref{sec;modinv}}). If no inverse exists the algorithm | |
4740 terminates with an error. | |
4741 | |
4742 \begin{figure}[!here] | |
4743 \begin{small} | |
4744 \begin{center} | |
4745 \begin{tabular}{l} | |
4746 \hline Algorithm \textbf{mp\_exptmod}. \\ | |
4747 \textbf{Input}. mp\_int $g$, $x$ and $p$ \\ | |
4748 \textbf{Output}. $y \equiv g^x \mbox{ (mod }p\mbox{)}$ \\ | |
4749 \hline \\ | |
4750 1. If $p.sign = MP\_NEG$ return(\textit{MP\_VAL}). \\ | |
4751 2. If $x.sign = MP\_NEG$ then \\ | |
4752 \hspace{3mm}2.1 $g' \leftarrow g^{-1} \mbox{ (mod }p\mbox{)}$ \\ | |
4753 \hspace{3mm}2.2 $x' \leftarrow \vert x \vert$ \\ | |
4754 \hspace{3mm}2.3 Compute $y \equiv g'^{x'} \mbox{ (mod }p\mbox{)}$ via recursion. \\ | |
4755 3. if $p$ is odd \textbf{OR} $p$ is a D.R. modulus then \\ | |
4756 \hspace{3mm}3.1 Compute $y \equiv g^{x} \mbox{ (mod }p\mbox{)}$ via algorithm mp\_exptmod\_fast. \\ | |
4757 4. else \\ | |
4758 \hspace{3mm}4.1 Compute $y \equiv g^{x} \mbox{ (mod }p\mbox{)}$ via algorithm s\_mp\_exptmod. \\ | |
4759 \hline | |
4760 \end{tabular} | |
4761 \end{center} | |
4762 \end{small} | |
4763 \caption{Algorithm mp\_exptmod} | |
4764 \end{figure} | |
4765 | |
4766 \textbf{Algorithm mp\_exptmod.} | |
4767 The first algorithm which actually performs modular exponentiation is algorithm s\_mp\_exptmod. It is a sliding window $k$-ary algorithm | |
4768 which uses Barrett reduction to reduce the product modulo $p$. The second algorithm mp\_exptmod\_fast performs the same operation | |
4769 except it uses either Montgomery or Diminished Radix reduction. The two latter reduction algorithms are clumped in the same exponentiation | |
4770 algorithm since their arguments are essentially the same (\textit{two mp\_ints and one mp\_digit}). | |
4771 | |
4772 EXAM,bn_mp_exptmod.c | |
4773 | |
4774 In order to keep the algorithms in a known state the first step on line @29,if@ is to reject any negative modulus as input. If the exponent is | |
4775 negative the algorithm tries to perform a modular exponentiation with the modular inverse of the base $G$. The temporary variable $tmpG$ is assigned | |
4776 the modular inverse of $G$ and $tmpX$ is assigned the absolute value of $X$. The algorithm will recurse with these new values and a positive | |
4777 exponent. | |
4778 | |
4779 If the exponent is positive the algorithm resumes the exponentiation. Line @63,dr_@ determines if the modulus is of the restricted Diminished Radix | |
4780 form. If it is not, line @65,reduce@ attempts to determine if it is of an unrestricted Diminished Radix form. The integer $dr$ will take on one | |
4781 of three values. | |
4782 | |
4783 \begin{enumerate} | |
4784 \item $dr = 0$ means that the modulus is not of either restricted or unrestricted Diminished Radix form. | |
4785 \item $dr = 1$ means that the modulus is of restricted Diminished Radix form. | |
4786 \item $dr = 2$ means that the modulus is of unrestricted Diminished Radix form. | |
4787 \end{enumerate} | |
4788 | |
4789 Line @69,if@ determines if the fast modular exponentiation algorithm can be used. It is allowed if $dr \ne 0$ or if the modulus is odd. Otherwise, | |
4790 the slower s\_mp\_exptmod algorithm is used which uses Barrett reduction. | |
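
The selection logic can be paraphrased as the short fragment below. The helper names mirror those used by the LibTomMath sources but
the fragment is illustrative only, and the negative exponent handling of step two is omitted.

\begin{verbatim}
#include <tommath.h>

/* Illustrative paraphrase of the dispatch inside mp_exptmod (not the
 * verbatim source; negative exponents are not handled here). */
int exptmod_dispatch(mp_int *G, mp_int *X, mp_int *P, mp_int *Y)
{
   int dr;

   if (P->sign == MP_NEG) {
      return MP_VAL;                 /* reject a negative modulus         */
   }

   dr = mp_dr_is_modulus(P);         /* dr = 1: restricted D.R. modulus   */
   if (dr == 0) {
      dr = mp_reduce_is_2k(P) << 1;  /* dr = 2: unrestricted D.R. modulus */
   }

   if (dr != 0 || mp_isodd(P) == 1) {
      return mp_exptmod_fast(G, X, P, Y, dr);   /* Montgomery or D.R.     */
   }
   return s_mp_exptmod(G, X, P, Y);             /* Barrett reduction      */
}
\end{verbatim}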
4791 | |
4792 \subsection{Barrett Modular Exponentiation} | |
4793 | |
4794 \newpage\begin{figure}[!here] | |
4795 \begin{small} | |
4796 \begin{center} | |
4797 \begin{tabular}{l} | |
4798 \hline Algorithm \textbf{s\_mp\_exptmod}. \\ | |
4799 \textbf{Input}. mp\_int $g$, $x$ and $p$ \\ | |
4800 \textbf{Output}. $y \equiv g^x \mbox{ (mod }p\mbox{)}$ \\ | |
4801 \hline \\ | |
4802 1. $k \leftarrow lg(x)$ \\ | |
4803 2. $winsize \leftarrow \left \lbrace \begin{array}{ll} | |
4804 2 & \mbox{if }k \le 7 \\ | |
4805 3 & \mbox{if }7 < k \le 36 \\ | |
4806 4 & \mbox{if }36 < k \le 140 \\ | |
4807 5 & \mbox{if }140 < k \le 450 \\ | |
4808 6 & \mbox{if }450 < k \le 1303 \\ | |
4809 7 & \mbox{if }1303 < k \le 3529 \\ | |
4810 8 & \mbox{if }3529 < k \\ | |
4811 \end{array} \right .$ \\ | |
4812 3. Initialize $2^{winsize}$ mp\_ints in an array named $M$ and one mp\_int named $\mu$ \\ | |
4813 4. Calculate the $\mu$ required for Barrett Reduction (\textit{mp\_reduce\_setup}). \\ | |
4814 5. $M_1 \leftarrow g \mbox{ (mod }p\mbox{)}$ \\ | |
4815 \\ | |
4816 Setup the table of small powers of $g$. First find $g^{2^{winsize}}$ and then all multiples of it. \\ | |
4817 6. $k \leftarrow 2^{winsize - 1}$ \\ | |
4818 7. $M_{k} \leftarrow M_1$ \\ | |
4819 8. for $ix$ from 0 to $winsize - 2$ do \\ | |
4820 \hspace{3mm}8.1 $M_k \leftarrow \left ( M_k \right )^2$ (\textit{mp\_sqr}) \\ | |
4821 \hspace{3mm}8.2 $M_k \leftarrow M_k \mbox{ (mod }p\mbox{)}$ (\textit{mp\_reduce}) \\ | |
4822 9. for $ix$ from $2^{winsize - 1} + 1$ to $2^{winsize} - 1$ do \\ | |
4823 \hspace{3mm}9.1 $M_{ix} \leftarrow M_{ix - 1} \cdot M_{1}$ (\textit{mp\_mul}) \\ | |
4824 \hspace{3mm}9.2 $M_{ix} \leftarrow M_{ix} \mbox{ (mod }p\mbox{)}$ (\textit{mp\_reduce}) \\ | |
4825 10. $res \leftarrow 1$ \\ | |
4826 \\ | |
4827 Start Sliding Window. \\ | |
4828 11. $mode \leftarrow 0, bitcnt \leftarrow 1, buf \leftarrow 0, digidx \leftarrow x.used - 1, bitcpy \leftarrow 0, bitbuf \leftarrow 0$ \\ | |
4829 12. Loop \\ | |
4830 \hspace{3mm}12.1 $bitcnt \leftarrow bitcnt - 1$ \\ | |
4831 \hspace{3mm}12.2 If $bitcnt = 0$ then do \\ | |
4832 \hspace{6mm}12.2.1 If $digidx = -1$ goto step 13. \\ | |
4833 \hspace{6mm}12.2.2 $buf \leftarrow x_{digidx}$ \\ | |
4834 \hspace{6mm}12.2.3 $digidx \leftarrow digidx - 1$ \\ | |
4835 \hspace{6mm}12.2.4 $bitcnt \leftarrow lg(\beta)$ \\ | |
4836 Continued on next page. \\ | |
4837 \hline | |
4838 \end{tabular} | |
4839 \end{center} | |
4840 \end{small} | |
4841 \caption{Algorithm s\_mp\_exptmod} | |
4842 \end{figure} | |
4843 | |
4844 \newpage\begin{figure}[!here] | |
4845 \begin{small} | |
4846 \begin{center} | |
4847 \begin{tabular}{l} | |
4848 \hline Algorithm \textbf{s\_mp\_exptmod} (\textit{continued}). \\ | |
4849 \textbf{Input}. mp\_int $g$, $x$ and $p$ \\ | |
4850 \textbf{Output}. $y \equiv g^x \mbox{ (mod }p\mbox{)}$ \\ | |
4851 \hline \\ | |
4852 \hspace{3mm}12.3 $y \leftarrow (buf >> (lg(\beta) - 1))$ AND $1$ \\ | |
4853 \hspace{3mm}12.4 $buf \leftarrow buf << 1$ \\ | |
4854 \hspace{3mm}12.5 if $mode = 0$ and $y = 0$ then goto step 12. \\ | |
4855 \hspace{3mm}12.6 if $mode = 1$ and $y = 0$ then do \\ | |
4856 \hspace{6mm}12.6.1 $res \leftarrow res^2$ \\ | |
4857 \hspace{6mm}12.6.2 $res \leftarrow res \mbox{ (mod }p\mbox{)}$ \\ | |
4858 \hspace{6mm}12.6.3 Goto step 12. \\ | |
4859 \hspace{3mm}12.7 $bitcpy \leftarrow bitcpy + 1$ \\ | |
4860 \hspace{3mm}12.8 $bitbuf \leftarrow bitbuf + (y << (winsize - bitcpy))$ \\ | |
4861 \hspace{3mm}12.9 $mode \leftarrow 2$ \\ | |
4862 \hspace{3mm}12.10 If $bitcpy = winsize$ then do \\ | |
4863 \hspace{6mm}Window is full so perform the squarings and single multiplication. \\ | |
4864 \hspace{6mm}12.10.1 for $ix$ from $0$ to $winsize -1$ do \\ | |
4865 \hspace{9mm}12.10.1.1 $res \leftarrow res^2$ \\ | |
4866 \hspace{9mm}12.10.1.2 $res \leftarrow res \mbox{ (mod }p\mbox{)}$ \\ | |
4867 \hspace{6mm}12.10.2 $res \leftarrow res \cdot M_{bitbuf}$ \\ | |
4868 \hspace{6mm}12.10.3 $res \leftarrow res \mbox{ (mod }p\mbox{)}$ \\ | |
4869 \hspace{6mm}Reset the window. \\ | |
4870 \hspace{6mm}12.10.4 $bitcpy \leftarrow 0, bitbuf \leftarrow 0, mode \leftarrow 1$ \\ | |
4871 \\ | |
4872 No more windows left. Check for residual bits of exponent. \\ | |
4873 13. If $mode = 2$ and $bitcpy > 0$ then do \\ | |
4874 \hspace{3mm}13.1 for $ix$ from $0$ to $bitcpy - 1$ do \\ | |
4875 \hspace{6mm}13.1.1 $res \leftarrow res^2$ \\ | |
4876 \hspace{6mm}13.1.2 $res \leftarrow res \mbox{ (mod }p\mbox{)}$ \\ | |
4877 \hspace{6mm}13.1.3 $bitbuf \leftarrow bitbuf << 1$ \\ | |
4878 \hspace{6mm}13.1.4 If $bitbuf$ AND $2^{winsize} \ne 0$ then do \\ | |
4879 \hspace{9mm}13.1.4.1 $res \leftarrow res \cdot M_{1}$ \\ | |
4880 \hspace{9mm}13.1.4.2 $res \leftarrow res \mbox{ (mod }p\mbox{)}$ \\ | |
4881 14. $y \leftarrow res$ \\ | |
4882 15. Clear $res$, $\mu$ and the $M$ array. \\ | |
4883 16. Return(\textit{MP\_OKAY}). \\ | |
4884 \hline | |
4885 \end{tabular} | |
4886 \end{center} | |
4887 \end{small} | |
4888 \caption{Algorithm s\_mp\_exptmod (continued)} | |
4889 \end{figure} | |
4890 | |
4891 \textbf{Algorithm s\_mp\_exptmod.} | |
4892 This algorithm computes the $x$'th power of $g$ modulo $p$ and stores the result in $y$. It takes advantage of the Barrett reduction | |
4893 algorithm to keep the product small throughout the algorithm. | |
4894 | |
4895 The first two steps determine the optimal window size based on the number of bits in the exponent. The larger the exponent the | |
4896 larger the window size becomes. After a window size $winsize$ has been chosen an array of $2^{winsize}$ mp\_int variables is allocated. This | |
4897 table will hold the values of $g^x \mbox{ (mod }p\mbox{)}$ for $2^{winsize - 1} \le x < 2^{winsize}$. | |
4898 | |
4899 After the table is allocated the first power of $g$ is found. Since $g \ge p$ is allowed it must first be reduced modulo $p$ to make | |
4900 the rest of the algorithm more efficient. The table element at index $2^{winsize - 1}$ is found by squaring $M_1$ successively $winsize - 1$ | |
4901 times. The rest of the table elements are found by multiplying the previous element by $M_1$ modulo $p$. | |
4902 | |
4903 Now that the table is available the sliding window may begin. The following list describes the functions of all the variables in the window. | |
4904 \begin{enumerate} | |
4905 \item The variable $mode$ dictates how the bits of the exponent are interpreted. | |
4906 \begin{enumerate} | |
4907 \item When $mode = 0$ the bits are ignored since no non-zero bit of the exponent has been seen yet. For example, if the exponent were simply | |
4908 $1$ then there would be $lg(\beta) - 1$ zero bits before the first non-zero bit. In this case bits are ignored until a non-zero bit is found. | |
4909 \item When $mode = 1$ a non-zero bit has been seen before and a new $winsize$-bit window has not been formed yet. In this mode leading $0$ bits | |
4910 are read and a single squaring is performed. If a non-zero bit is read a new window is created. | |
4911 \item When $mode = 2$ the algorithm is in the middle of forming a window and new bits are appended to the window from the most significant bit | |
4912 downwards. | |
4913 \end{enumerate} | |
4914 \item The variable $bitcnt$ indicates how many bits of the current exponent digit remain to be read. When it reaches zero a new digit | |
4915 is fetched from the exponent. | |
4916 \item The variable $buf$ holds the currently read digit of the exponent. | |
4917 \item The variable $digidx$ is an index into the exponent's digits. It starts at the leading digit $x.used - 1$ and moves towards the trailing digit. | |
4918 \item The variable $bitcpy$ indicates how many bits are in the currently formed window. When it reaches $winsize$ the window is flushed and | |
4919 the appropriate operations performed. | |
4920 \item The variable $bitbuf$ holds the current bits of the window being formed. | |
4921 \end{enumerate} | |
4922 | |
4923 All of step 12 is the window processing loop. It will iterate while there are digits available from the exponent to read. The first step | |
4924 inside this loop is to extract a new digit if no more bits are available in the current digit. If there are no bits left a new digit is | |
4925 read and if there are no digits left then the loop terminates. | |
4926 | |
4927 After a digit is made available step 12.3 will extract the most significant bit of the current digit and move all other bits in the digit | |
4928 upwards. In effect the digit is read from most significant bit to least significant bit and since the digits are read from leading to | |
4929 trailing edges the entire exponent is read from most significant bit to least significant bit. | |
4930 | |
4931 At step 12.5 if the $mode$ and currently extracted bit $y$ are both zero the bit is ignored and the next bit is read. This prevents the | |
4932 algorithm from having to perform trivial squaring and reduction operations before the first non-zero bit is read. Steps 12.6 and 12.7 through 12.10 handle | |
4933 the two cases of $mode = 1$ and $mode = 2$ respectively. | |
4934 | |
4935 FIGU,expt_state,Sliding Window State Diagram | |
4936 | |
4937 By step 13 there are no more digits left in the exponent. However, there may be partial bits in the window left. If $mode = 2$ then | |
4938 a Left-to-Right algorithm is used to process the remaining few bits. | |
4939 | |
4940 EXAM,bn_s_mp_exptmod.c | |
4941 | |
4942 Lines @26,if@ through @40,}@ determine the optimal window size based on the length of the exponent in bits. The window divisions are sorted | |
4943 from smallest to greatest so that in each \textbf{if} statement only one condition must be tested. For example, by the \textbf{if} statement | |
4944 on line @32,if@ the value of $x$ is already known to be greater than $140$. | |
4945 | |
4946 The conditional piece of code beginning on line @42,ifdef@ allows the window size to be restricted to five bits. This logic is used to ensure | |
4947 the table of precomputed powers of $G$ remains relatively small. | |
4948 | |
4949 The for loop on line @49,for@ initializes the $M$ array while lines @59,mp_init@ and @62,mp_reduce@ compute the value of $\mu$ required for | |
4950 Barrett reduction. | |
4951 | |
4952 -- More later. | |
4953 | |
4954 \section{Quick Power of Two} | |
4955 Calculating $b = 2^a$ can be performed much more quickly than with any of the previous algorithms. Recall that a logical shift left $m << k$ is | |
4956 equivalent to $m \cdot 2^k$. By this logic, when $m = 1$ a quick power of two can be achieved simply by setting the single required bit. | |
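
Over a raw little-endian array of $lg(\beta)$-bit digits the operation amounts to zeroing the value and setting one bit, as the
following sketch shows. The digit size is an assumption made for the sketch; it is not the bn\_mp\_2expt.c source.

\begin{verbatim}
#include <stdint.h>

#define DIGIT_BIT 28          /* assumed digit size for this sketch */

/* Set digits[0..ndigits-1] to represent 2^b.  The caller must ensure
 * that ndigits > b / DIGIT_BIT so the bit fits in the array. */
void two_expt(uint32_t *digits, int ndigits, int b)
{
   int i;

   for (i = 0; i < ndigits; i++) {
      digits[i] = 0;                                       /* a <- 0    */
   }
   digits[b / DIGIT_BIT] = (uint32_t)1 << (b % DIGIT_BIT); /* set bit b */
}
\end{verbatim}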
4957 | |
4958 \begin{figure}[!here] | |
4959 \begin{small} | |
4960 \begin{center} | |
4961 \begin{tabular}{l} | |
4962 \hline Algorithm \textbf{mp\_2expt}. \\ | |
4963 \textbf{Input}. integer $b$ \\ | |
4964 \textbf{Output}. $a \leftarrow 2^b$ \\ | |
4965 \hline \\ | |
4966 1. $a \leftarrow 0$ \\ | |
4967 2. If $a.alloc < \lfloor b / lg(\beta) \rfloor + 1$ then grow $a$ appropriately. \\ | |
4968 3. $a.used \leftarrow \lfloor b / lg(\beta) \rfloor + 1$ \\ | |
4969 4. $a_{\lfloor b / lg(\beta) \rfloor} \leftarrow 1 << (b \mbox{ mod } lg(\beta))$ \\ | |
4970 5. Return(\textit{MP\_OKAY}). \\ | |
4971 \hline | |
4972 \end{tabular} | |
4973 \end{center} | |
4974 \end{small} | |
4975 \caption{Algorithm mp\_2expt} | |
4976 \end{figure} | |
4977 | |
4978 \textbf{Algorithm mp\_2expt.} | |
4979 | |
4980 EXAM,bn_mp_2expt.c | |
4981 | |
4982 \chapter{Higher Level Algorithms} | |
4983 | |
4984 This chapter discusses the various higher level algorithms that are required to complete a well rounded multiple precision integer package. These | |
4985 routines are less performance oriented than the algorithms of chapters five, six and seven but are no less important. | |
4986 | |
4987 The first section describes a method of integer division with remainder that is universally well known. It provides the signed division logic | |
4988 for the package. The subsequent section discusses a set of algorithms which allow a single digit to be the 2nd operand for a variety of operations. | |
4989 These algorithms serve mostly to simplify other algorithms where small constants are required. The last two sections discuss how to manipulate | |
4990 various representations of integers. For example, converting from an mp\_int to a string of characters. | |
4991 | |
4992 \section{Integer Division with Remainder} | |
4993 \label{sec:division} | |
4994 | |
4995 Aside from modular exponentiation, integer division is the most intensive algorithm to compute. Like addition, subtraction and multiplication | |
4996 the basis of this algorithm is the long-hand division algorithm taught to school children. Throughout this discussion several common variables | |
4997 will be used. Let $x$ represent the divisor and $y$ represent the dividend. Let $q$ represent the integer quotient $\lfloor y / x \rfloor$ and | |
4998 let $r$ represent the remainder $r = y - x \lfloor y / x \rfloor$. The following simple algorithm will be used to start the discussion. | |
4999 | |
5000 \newpage\begin{figure}[!here] | |
5001 \begin{small} | |
5002 \begin{center} | |
5003 \begin{tabular}{l} | |
5004 \hline Algorithm \textbf{Radix-$\beta$ Integer Division}. \\ | |
5005 \textbf{Input}. integer $x$ and $y$ \\ | |
5006 \textbf{Output}. $q = \lfloor y/x\rfloor, r = y - xq$ \\ | |
5007 \hline \\ | |
5008 1. $q \leftarrow 0$ \\ | |
5009 2. $n \leftarrow \vert \vert y \vert \vert - \vert \vert x \vert \vert$ \\ | |
5010 3. for $t$ from $n$ down to $0$ do \\ | |
5011 \hspace{3mm}3.1 Maximize $k$ such that $kx\beta^t$ is less than or equal to $y$ and $(k + 1)x\beta^t$ is greater. \\ | |
5012 \hspace{3mm}3.2 $q \leftarrow q + k\beta^t$ \\ | |
5013 \hspace{3mm}3.3 $y \leftarrow y - kx\beta^t$ \\ | |
5014 4. $r \leftarrow y$ \\ | |
5015 5. Return($q, r$) \\ | |
5016 \hline | |
5017 \end{tabular} | |
5018 \end{center} | |
5019 \end{small} | |
5020 \caption{Algorithm Radix-$\beta$ Integer Division} | |
5021 \label{fig:raddiv} | |
5022 \end{figure} | |
5023 | |
5024 As children we are taught this very simple algorithm for the case of $\beta = 10$. Almost instinctively several optimizations are taught whose | |
5025 reasons for existing are never explained. For this example let $y = 5471$ represent the dividend and $x = 23$ represent the divisor. | |
5026 | |
5027 To find the first digit of the quotient the value of $k$ must be maximized such that $kx\beta^t$ is less than or equal to $y$ and | |
5028 simultaneously $(k + 1)x\beta^t$ is greater than $y$. Implicitly $k$ is the maximum value the $t$'th digit of the quotient may have. The habitual method | |
5029 used to find the maximum is to ``eyeball'' the two numbers, typically only the leading digits and quickly estimate a quotient. By only using leading | |
5030 digits a much simpler division may be used to form an educated guess at what the value must be. In this case $k = \lfloor 54/23\rfloor = 2$ quickly | |
5031 arises as a possible solution. Indeed $2x\beta^2 = 4600$ is less than $y = 5471$ and simultaneously $(k + 1)x\beta^2 = 6900$ is larger than $y$. | |
5032 As a result $k\beta^2$ is added to the quotient which now equals $q = 200$ and $4600$ is subtracted from $y$ to give a remainder of $y = 871$. | |
5033 | |
5034 Again this process is repeated to produce the quotient digit $k = 3$ which makes the quotient $q = 200 + 3\beta = 230$ and the remainder | |
5035 $y = 871 - 3x\beta = 181$. Finally the last iteration of the loop produces $k = 7$ which leads to the quotient $q = 230 + 7 = 237$ and the | |
5036 remainder $y = 181 - 7x = 20$. The final quotient and remainder found are $q = 237$ and $r = y = 20$ which are indeed correct since | |
5037 $237 \cdot 23 + 20 = 5471$ is true. | |
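
The worked example can be replayed with a few lines of C. The fragment below merely restates the algorithm of the previous figure
with $\beta = 10$ and the values $y = 5471$, $x = 23$; it is not library code.

\begin{verbatim}
#include <stdio.h>

int main(void)
{
   long y = 5471, x = 23, q = 0;
   long scale = 100;                  /* beta^n, n = ||y|| - ||x|| = 2  */
   long t, k;

   for (t = scale; t >= 1; t /= 10) {
      k = 0;
      while ((k + 1) * x * t <= y) {  /* maximize k for this position   */
         k++;
      }
      q += k * t;                     /* q <- q + k*beta^t              */
      y -= k * x * t;                 /* y <- y - k*x*beta^t            */
   }
   printf("q = %ld, r = %ld\n", q, y); /* prints q = 237, r = 20        */
   return 0;
}
\end{verbatim}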
5038 | |
5039 \subsection{Quotient Estimation} | |
5040 \label{sec:divest} | |
5041 As alluded to earlier the quotient digit $k$ can be estimated from only the leading digits of both the divisor and dividend. When $p$ leading | |
5042 digits are used from both the divisor and dividend to form an estimation the accuracy of the estimation rises as $p$ grows. Technically | |
5043 speaking the estimation is based on assuming the lower $\vert \vert y \vert \vert - p$ and $\vert \vert x \vert \vert - p$ digits of the | |
5044 dividend and divisor are zero. | |
5045 | |
5046 The value of the estimation may be off by a few values in either direction and in general is fairly correct. A simplification \cite[pp. 271]{TAOCPV2} | |
5047 of the estimation technique is to use $t + 1$ digits of the dividend and $t$ digits of the divisor, in particular when $t = 1$. The estimate | |
5048 using this technique is never too small. For the following proof let $t = \vert \vert y \vert \vert - 1$ and $s = \vert \vert x \vert \vert - 1$ | |
5049 represent the most significant digits of the dividend and divisor respectively. | |
5050 | |
5051 \textbf{Proof.}\textit{ The quotient $\hat k = \lfloor (y_t\beta + y_{t-1}) / x_s \rfloor$ is greater than or equal to | |
5052 $k = \lfloor y / (x \cdot \beta^{\vert \vert y \vert \vert - \vert \vert x \vert \vert - 1}) \rfloor$. } | |
5053 The first obvious case is when $\hat k = \beta - 1$ in which case the proof is concluded since the real quotient cannot be larger. For all other | |
5054 cases $\hat k = \lfloor (y_t\beta + y_{t-1}) / x_s \rfloor$ and $\hat k x_s \ge y_t\beta + y_{t-1} - x_s + 1$. The latter portion of the inequality | |
5055 $-x_s + 1$ arises from the fact that a truncated integer division will give the same quotient for at most $x_s - 1$ values. Next a series of | |
5056 inequalities will prove the hypothesis. | |
5057 | |
5058 \begin{equation} | |
5059 y - \hat k x \le y - \hat k x_s\beta^s | |
5060 \end{equation} | |
5061 | |
5062 This is trivially true since $x \ge x_s\beta^s$. Next we replace $\hat kx_s\beta^s$ by the previous inequality for $\hat kx_s$. | |
5063 | |
5064 \begin{equation} | |
5065 y - \hat k x \le y_t\beta^t + \ldots + y_0 - (y_t\beta^t + y_{t-1}\beta^{t-1} - x_s\beta^t + \beta^s) | |
5066 \end{equation} | |
5067 | |
5068 By simplifying the previous inequality the following inequality is formed. | |
5069 | |
5070 \begin{equation} | |
5071 y - \hat k x \le y_{t-2}\beta^{t-2} + \ldots + y_0 + x_s\beta^s - \beta^s | |
5072 \end{equation} | |
5073 | |
5074 Subsequently, | |
5075 | |
5076 \begin{equation} | |
5077 y_{t-2}\beta^{t-2} + \ldots + y_0 + x_s\beta^s - \beta^s < x_s\beta^s \le x | |
5078 \end{equation} | |
5079 | |
5080 Which proves that $y - \hat kx \le x$ and by consequence $\hat k \ge k$ which concludes the proof. \textbf{QED} | |
5081 | |
5082 | |
5083 \subsection{Normalized Integers} | |
5084 For the purposes of division a normalized input is when the divisor's leading digit $x_n$ is greater than or equal to $\beta / 2$. By multiplying both | |
5085 $x$ and $y$ by $j = \lfloor (\beta / 2) / x_n \rfloor$ the quotient remains unchanged and the remainder is simply $j$ times the original | |
5086 remainder. The purpose of normalization is to ensure the leading digit of the divisor is sufficiently large such that the estimated quotient will | |
5087 lie in the domain of a single digit. Consider the maximum dividend $(\beta - 1) \cdot \beta + (\beta - 1)$ and the minimum divisor $\beta / 2$. | |
5088 | |
5089 \begin{equation} | |
5090 {{\beta^2 - 1} \over { \beta / 2}} \le 2\beta - {2 \over \beta} | |
5091 \end{equation} | |
5092 | |
5093 At most the quotient approaches $2\beta$, however, in practice this will not occur since that would imply the previous quotient digit was too small. | |
5094 | |
5095 \subsection{Radix-$\beta$ Division with Remainder} | |
5096 \newpage\begin{figure}[!here] | |
5097 \begin{small} | |
5098 \begin{center} | |
5099 \begin{tabular}{l} | |
5100 \hline Algorithm \textbf{mp\_div}. \\ | |
5101 \textbf{Input}. mp\_int $a, b$ \\ | |
5102 \textbf{Output}. $c = \lfloor a/b \rfloor$, $d = a - bc$ \\ | |
5103 \hline \\ | |
5104 1. If $b = 0$ return(\textit{MP\_VAL}). \\ | |
5105 2. If $\vert a \vert < \vert b \vert$ then do \\ | |
5106 \hspace{3mm}2.1 $d \leftarrow a$ \\ | |
5107 \hspace{3mm}2.2 $c \leftarrow 0$ \\ | |
5108 \hspace{3mm}2.3 Return(\textit{MP\_OKAY}). \\ | |
5109 \\ | |
5110 Setup the quotient to receive the digits. \\ | |
5111 3. Grow $q$ to $a.used + 2$ digits. \\ | |
5112 4. $q \leftarrow 0$ \\ | |
5113 5. $x \leftarrow \vert a \vert , y \leftarrow \vert b \vert$ \\ | |
5114 6. $sign \leftarrow \left \lbrace \begin{array}{ll} | |
5115 MP\_ZPOS & \mbox{if }a.sign = b.sign \\ | |
5116 MP\_NEG & \mbox{otherwise} \\ | |
5117 \end{array} \right .$ \\ | |
5118 \\ | |
5119 Normalize the inputs such that the leading digit of $y$ is greater than or equal to $\beta / 2$. \\ | |
5120 7. $norm \leftarrow (lg(\beta) - 1) - (\lceil lg(y) \rceil \mbox{ (mod }lg(\beta)\mbox{)})$ \\ | |
5121 8. $x \leftarrow x \cdot 2^{norm}, y \leftarrow y \cdot 2^{norm}$ \\ | |
5122 \\ | |
5123 Find the leading digit of the quotient. \\ | |
5124 9. $n \leftarrow x.used - 1, t \leftarrow y.used - 1$ \\ | |
5125 10. $y \leftarrow y \cdot \beta^{n - t}$ \\ | |
5126 11. While ($x \ge y$) do \\ | |
5127 \hspace{3mm}11.1 $q_{n - t} \leftarrow q_{n - t} + 1$ \\ | |
5128 \hspace{3mm}11.2 $x \leftarrow x - y$ \\ | |
5129 12. $y \leftarrow \lfloor y / \beta^{n-t} \rfloor$ \\ | |
5130 \\ | |
5131 Continued on the next page. \\ | |
5132 \hline | |
5133 \end{tabular} | |
5134 \end{center} | |
5135 \end{small} | |
5136 \caption{Algorithm mp\_div} | |
5137 \end{figure} | |
5138 | |
5139 \newpage\begin{figure}[!here] | |
5140 \begin{small} | |
5141 \begin{center} | |
5142 \begin{tabular}{l} | |
5143 \hline Algorithm \textbf{mp\_div} (continued). \\ | |
5144 \textbf{Input}. mp\_int $a, b$ \\ | |
5145 \textbf{Output}. $c = \lfloor a/b \rfloor$, $d = a - bc$ \\ | |
5146 \hline \\ | |
5147 Now find the remainder of the digits. \\ | |
5148 13. for $i$ from $n$ down to $(t + 1)$ do \\ | |
5149 \hspace{3mm}13.1 If $i > x.used$ then jump to the next iteration of this loop. \\ | |
5150 \hspace{3mm}13.2 If $x_{i} = y_{t}$ then \\ | |
5151 \hspace{6mm}13.2.1 $q_{i - t - 1} \leftarrow \beta - 1$ \\ | |
5152 \hspace{3mm}13.3 else \\ | |
5153 \hspace{6mm}13.3.1 $\hat r \leftarrow x_{i} \cdot \beta + x_{i - 1}$ \\ | |
5154 \hspace{6mm}13.3.2 $\hat r \leftarrow \lfloor \hat r / y_{t} \rfloor$ \\ | |
5155 \hspace{6mm}13.3.3 $q_{i - t - 1} \leftarrow \hat r$ \\ | |
5156 \hspace{3mm}13.4 $q_{i - t - 1} \leftarrow q_{i - t - 1} + 1$ \\ | |
5157 \\ | |
5158 Fixup quotient estimation. \\ | |
5159 \hspace{3mm}13.5 Loop \\ | |
5160 \hspace{6mm}13.5.1 $q_{i - t - 1} \leftarrow q_{i - t - 1} - 1$ \\ | |
5161 \hspace{6mm}13.5.2 t$1 \leftarrow 0$ \\ | |
5162 \hspace{6mm}13.5.3 t$1_0 \leftarrow y_{t - 1}, $ t$1_1 \leftarrow y_t,$ t$1.used \leftarrow 2$ \\ | |
5163 \hspace{6mm}13.5.4 $t1 \leftarrow t1 \cdot q_{i - t - 1}$ \\ | |
5164 \hspace{6mm}13.5.5 t$2_0 \leftarrow x_{i - 2}, $ t$2_1 \leftarrow x_{i - 1}, $ t$2_2 \leftarrow x_i, $ t$2.used \leftarrow 3$ \\ | |
5165 \hspace{6mm}13.5.6 If $\vert t1 \vert > \vert t2 \vert$ then goto step 13.5. \\ | |
5166 \hspace{3mm}13.6 t$1 \leftarrow y \cdot q_{i - t - 1}$ \\ | |
5167 \hspace{3mm}13.7 t$1 \leftarrow $ t$1 \cdot \beta^{i - t - 1}$ \\ | |
5168 \hspace{3mm}13.8 $x \leftarrow x - $ t$1$ \\ | |
5169 \hspace{3mm}13.9 If $x.sign = MP\_NEG$ then \\ | |
5170 \hspace{6mm}13.10 t$1 \leftarrow y$ \\ | |
5171 \hspace{6mm}13.11 t$1 \leftarrow $ t$1 \cdot \beta^{i - t - 1}$ \\ | |
5172 \hspace{6mm}13.12 $x \leftarrow x + $ t$1$ \\ | |
5173 \hspace{6mm}13.13 $q_{i - t - 1} \leftarrow q_{i - t - 1} - 1$ \\ | |
5174 \\ | |
5175 Finalize the result. \\ | |
5176 14. Clamp excess digits of $q$ \\ | |
5177 15. $c \leftarrow q, c.sign \leftarrow sign$ \\ | |
5178 16. $x.sign \leftarrow a.sign$ \\ | |
5179 17. $d \leftarrow \lfloor x / 2^{norm} \rfloor$ \\ | |
5180 18. Return(\textit{MP\_OKAY}). \\ | |
5181 \hline | |
5182 \end{tabular} | |
5183 \end{center} | |
5184 \end{small} | |
5185 \caption{Algorithm mp\_div (continued)} | |
5186 \end{figure} | |
5187 \textbf{Algorithm mp\_div.} | |
5188 This algorithm will calculate quotient and remainder from an integer division given a dividend and divisor. The algorithm is a signed | |
5189 division and will produce a fully qualified quotient and remainder. | |
5190 | |
5191 First the divisor $b$ must be non-zero which is enforced in step one. If the divisor is larger than the dividend then the quotient is implicitly | |
5192 zero and the remainder is the dividend. | |
5193 | |
5194 After the first two trivial cases of inputs are handled the variable $q$ is set up to receive the digits of the quotient. Two unsigned copies of the | |
5195 divisor $y$ and dividend $x$ are made as well. The core of the division algorithm is an unsigned division and will only work if the values are | |
5196 positive. Now the two values $x$ and $y$ must be normalized such that the leading digit of $y$ is greater than or equal to $\beta / 2$. | |
5197 This is performed by shifting both to the left by enough bits to get the desired normalization. | |
5198 | |
5199 At this point the division algorithm can begin producing digits of the quotient. Recall that the maximum value of the estimation used is | |
5200 $2\beta - {2 \over \beta}$ which means that a digit of the quotient must be first produced by another means. In this case $y$ is shifted | |
5201 to the left (\textit{step ten}) so that it has the same number of digits as $x$. The loop on step eleven will subtract multiples of the | |
5202 shifted copy of $y$ until $x$ is smaller. Since the leading digit of $y$ is greater than or equal to $\beta/2$ this loop will iterate at most two | |
5203 times to produce the desired leading digit of the quotient. | |
5204 | |
5205 Now the remainder of the digits can be produced. The equation $\hat q = \lfloor {{x_i \beta + x_{i-1}}\over y_t} \rfloor$ is used to fairly | |
5206 accurately approximate the true quotient digit. The estimation can in theory produce an estimation as high as $2\beta - {2 \over \beta}$ but by | |
5207 induction the upper quotient digit is correct (\textit{as established on step eleven}) and the estimate must be less than $\beta$. | |
5208 | |
5209 Recall from section~\ref{sec:divest} that the estimation is never too low but may be too high. The next step of the estimation process is | |
5210 to refine the estimation. The loop on step 13.5 uses $x_i\beta^2 + x_{i-1}\beta + x_{i-2}$ and $q_{i - t - 1}(y_t\beta + y_{t-1})$ as a higher | |
5211 order approximation to adjust the quotient digit. | |
5212 | |
5213 After both phases of estimation the quotient digit may still be off by a value of one\footnote{This is similar to the error introduced | |
5214 by optimizing Barrett reduction.}. Steps 13.6 through 13.8 subtract the multiple of the divisor from the dividend (\textit{similar to step 3.3 of | |
5215 algorithm~\ref{fig:raddiv}}) and then subsequently add a multiple of the divisor if the quotient was too large. | |
5216 | |
5217 Now that the quotient has been determined, finalizing the result is a matter of clamping the quotient, fixing the sizes and de-normalizing the | |
5218 remainder. An important aspect of this algorithm seemingly overlooked in other descriptions such as that of Algorithm 14.20 HAC \cite[pp. 598]{HAC} | |
5219 is that when the estimations are being made (\textit{inside the loop on step 13.5}) the digits $y_{t-1}$, $x_{i-2}$ and $x_{i-1}$ may lie | |
5220 outside their respective boundaries. For example, if $t = 0$ or $i \le 1$ then the digits would be undefined. In those cases the digits should | |
5221 respectively be replaced with a zero. | |
5222 | |
5223 EXAM,bn_mp_div.c | |
5224 | |
5225 The implementation of this algorithm differs slightly from the pseudo code presented previously. In this algorithm either of the quotient $c$ or | |
5226 remainder $d$ may be passed as a \textbf{NULL} pointer which indicates their value is not desired. For example, the C code to call the division | |
5227 algorithm with only the quotient is | |
5228 | |
5229 \begin{verbatim} | |
5230 mp_div(&a, &b, &c, NULL); /* c = [a/b] */ | |
5231 \end{verbatim} | |
5232 | |
5233 Lines @37,if@ and @42,if@ handle the two trivial cases of inputs which are division by zero and dividend smaller than the divisor | |
5234 respectively. After the two trivial cases all of the temporary variables are initialized. Line @76,neg@ determines the sign of | |
5235 the quotient and line @77,sign@ ensures that both $x$ and $y$ are positive. | |
5236 | |
5237 The number of bits in the leading digit is calculated on line @80,norm@. Implicitly an mp\_int with $r$ digits will require $lg(\beta)(r-1) + k$ bits | |
5238 of precision which when reduced modulo $lg(\beta)$ produces the value of $k$. In this case $k$ is the number of bits in the leading digit which is | |
5239 exactly what is required. For the algorithm to operate $k$ must equal $lg(\beta) - 1$ and when it does not the inputs must be normalized by shifting | |
5240 them to the left by $lg(\beta) - 1 - k$ bits. | |
5241 | |
5242 Throughout the variables $n$ and $t$ will represent the highest digit of $x$ and $y$ respectively. These are first used to produce the | |
5243 leading digit of the quotient. The loop beginning on line @113,for@ will produce the remainder of the quotient digits. | |
5244 | |
5245 The conditional ``continue'' on line @114,if@ is used to prevent the algorithm from reading past the leading edge of $x$ which can occur when the | |
5246 algorithm eliminates multiple non-zero digits in a single iteration. This ensures that $x_i$ is always non-zero since by definition the digits | |
5247 of $x$ above the $i$'th position must be zero in order for the quotient to be precise\footnote{Precise as far as integer division is concerned.}. | |
5248 | |
5249 Lines @142,t1@, @143,t1@ and @150,t2@ through @152,t2@ manually construct the high accuracy estimations by setting the digits of the two mp\_int | |
5250 variables directly. | |
5251 | |
5252 \section{Single Digit Helpers} | |
5253 | |
5254 This section briefly describes a series of single digit helper algorithms which come in handy when working with small constants. All of | |
5255 the helper functions assume the single digit input is positive and will treat it as such. | |
5256 | |
5257 \subsection{Single Digit Addition and Subtraction} | |
5258 | |
5259 Both addition and subtraction are performed by ``cheating'' and using mp\_set followed by the higher level addition or subtraction | |
5260 algorithms. As a result these algorithms are substantially simpler with a slight cost in performance. | |
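
A minimal sketch of this ``cheating'' approach, assuming only the standard LibTomMath calls mp\_init, mp\_set, mp\_add and
mp\_clear, is given below. It illustrates the idea rather than reproducing bn\_mp\_add\_d.c verbatim.

\begin{verbatim}
#include <tommath.h>

/* c = a + b for a single digit b, by promoting b to a temporary mp_int. */
int add_digit(mp_int *a, mp_digit b, mp_int *c)
{
   mp_int t;
   int    res;

   if ((res = mp_init(&t)) != MP_OKAY) {
      return res;
   }
   mp_set(&t, b);              /* t <- b     */
   res = mp_add(a, &t, c);     /* c <- a + t */
   mp_clear(&t);
   return res;
}
\end{verbatim}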
5261 | |
5262 \newpage\begin{figure}[!here] | |
5263 \begin{small} | |
5264 \begin{center} | |
5265 \begin{tabular}{l} | |
5266 \hline Algorithm \textbf{mp\_add\_d}. \\ | |
5267 \textbf{Input}. mp\_int $a$ and a mp\_digit $b$ \\ | |
5268 \textbf{Output}. $c = a + b$ \\ | |
5269 \hline \\ | |
5270 1. $t \leftarrow b$ (\textit{mp\_set}) \\ | |
5271 2. $c \leftarrow a + t$ \\ | |
5272 3. Return(\textit{MP\_OKAY}) \\ | |
5273 \hline | |
5274 \end{tabular} | |
5275 \end{center} | |
5276 \end{small} | |
5277 \caption{Algorithm mp\_add\_d} | |
5278 \end{figure} | |
5279 | |
5280 \textbf{Algorithm mp\_add\_d.} | |
5281 This algorithm initiates a temporary mp\_int with the value of the single digit and uses algorithm mp\_add to add the two values together. | |
5282 | |
5283 EXAM,bn_mp_add_d.c | |
5284 | |
5285 Clever use of the letter 't'. | |
5286 | |
5287 \subsubsection{Subtraction} | |
5288 The single digit subtraction algorithm mp\_sub\_d is essentially the same except it uses mp\_sub to subtract the digit from the mp\_int. | |
5289 | |
5290 \subsection{Single Digit Multiplication} | |
5291 Single digit multiplication arises enough in division and radix conversion that it ought to be implemented as a special case of the baseline | |
5292 multiplication algorithm. Essentially this algorithm is a modified version of algorithm s\_mp\_mul\_digs where one of the multiplicands | |
5293 only has one digit. | |
5294 | |
5295 \begin{figure}[!here] | |
5296 \begin{small} | |
5297 \begin{center} | |
5298 \begin{tabular}{l} | |
5299 \hline Algorithm \textbf{mp\_mul\_d}. \\ | |
5300 \textbf{Input}. mp\_int $a$ and a mp\_digit $b$ \\ | |
5301 \textbf{Output}. $c = ab$ \\ | |
5302 \hline \\ | |
5303 1. $pa \leftarrow a.used$ \\ | |
5304 2. Grow $c$ to at least $pa + 1$ digits. \\ | |
5305 3. $oldused \leftarrow c.used$ \\ | |
5306 4. $c.used \leftarrow pa + 1$ \\ | |
5307 5. $c.sign \leftarrow a.sign$ \\ | |
5308 6. $\mu \leftarrow 0$ \\ | |
5309 7. for $ix$ from $0$ to $pa - 1$ do \\ | |
5310 \hspace{3mm}7.1 $\hat r \leftarrow \mu + a_{ix}b$ \\ | |
5311 \hspace{3mm}7.2 $c_{ix} \leftarrow \hat r \mbox{ (mod }\beta\mbox{)}$ \\ | |
5312 \hspace{3mm}7.3 $\mu \leftarrow \lfloor \hat r / \beta \rfloor$ \\ | |
5313 8. $c_{pa} \leftarrow \mu$ \\ | |
5314 9. for $ix$ from $pa + 1$ to $oldused$ do \\ | |
5315 \hspace{3mm}9.1 $c_{ix} \leftarrow 0$ \\ | |
5316 10. Clamp excess digits of $c$. \\ | |
5317 11. Return(\textit{MP\_OKAY}). \\ | |
5318 \hline | |
5319 \end{tabular} | |
5320 \end{center} | |
5321 \end{small} | |
5322 \caption{Algorithm mp\_mul\_d} | |
5323 \end{figure} | |
5324 \textbf{Algorithm mp\_mul\_d.} | |
5325 This algorithm quickly multiplies an mp\_int by a small single digit value. It is specially tailored to the job and has minimal overhead. | |
5326 Unlike the full multiplication algorithms this algorithm does not require any significant temporary storage or memory allocations. | |
5327 | |
5328 EXAM,bn_mp_mul_d.c | |
5329 | |
5330 In this implementation the destination $c$ may point to the same mp\_int as the source $a$ since the result is written after the digit is | |
5331 read from the source. This function uses pointer aliases $tmpa$ and $tmpc$ for the digits of $a$ and $c$ respectively. | |
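
The carry propagation can also be illustrated over raw digit arrays as follows. The digit typedefs are assumptions made for the
sketch; it is not the library function itself.

\begin{verbatim}
#include <stdint.h>

typedef uint32_t mp_digit;    /* assumed 28-bit digits for this sketch */
typedef uint64_t mp_word;
#define DIGIT_BIT 28
#define MP_MASK   ((((mp_word)1) << DIGIT_BIT) - 1)

/* Sketch of the carry loop of algorithm mp_mul_d over little-endian digit
 * arrays.  c must have room for used + 1 digits; the new count is returned. */
int mul_digit(const mp_digit *a, int used, mp_digit b, mp_digit *c)
{
   mp_word u = 0, r;                            /* u is the carry (mu)  */
   int     ix;

   for (ix = 0; ix < used; ix++) {
      r     = u + (mp_word)a[ix] * b;           /* r^ <- mu + a_ix * b  */
      c[ix] = (mp_digit)(r & MP_MASK);          /* keep DIGIT_BIT bits  */
      u     = r >> DIGIT_BIT;                   /* carry to next digit  */
   }
   c[used] = (mp_digit)u;                       /* final carry          */
   return (c[used] != 0) ? used + 1 : used;     /* clamp the top digit  */
}
\end{verbatim}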
5332 | |
5333 \subsection{Single Digit Division} | |
5334 Like the single digit multiplication algorithm, single digit division is also a fairly common algorithm used in radix conversion. Since the | |
5335 divisor is only a single digit a specialized variant of the division algorithm can be used to compute the quotient. | |
5336 | |
5337 \newpage\begin{figure}[!here] | |
5338 \begin{small} | |
5339 \begin{center} | |
5340 \begin{tabular}{l} | |
5341 \hline Algorithm \textbf{mp\_div\_d}. \\ | |
5342 \textbf{Input}. mp\_int $a$ and a mp\_digit $b$ \\ | |
5343 \textbf{Output}. $c = \lfloor a / b \rfloor, d = a - cb$ \\ | |
5344 \hline \\ | |
5345 1. If $b = 0$ then return(\textit{MP\_VAL}).\\ | |
5346 2. If $b = 3$ then use algorithm mp\_div\_3 instead. \\ | |
5347 3. Init $q$ to $a.used$ digits. \\ | |
5348 4. $q.used \leftarrow a.used$ \\ | |
5349 5. $q.sign \leftarrow a.sign$ \\ | |
5350 6. $\hat w \leftarrow 0$ \\ | |
5351 7. for $ix$ from $a.used - 1$ down to $0$ do \\ | |
5352 \hspace{3mm}7.1 $\hat w \leftarrow \hat w \beta + a_{ix}$ \\ | |
5353 \hspace{3mm}7.2 If $\hat w \ge b$ then \\ | |
5354 \hspace{6mm}7.2.1 $t \leftarrow \lfloor \hat w / b \rfloor$ \\ | |
5355 \hspace{6mm}7.2.2 $\hat w \leftarrow \hat w \mbox{ (mod }b\mbox{)}$ \\ | |
5356 \hspace{3mm}7.3 else\\ | |
5357 \hspace{6mm}7.3.1 $t \leftarrow 0$ \\ | |
5358 \hspace{3mm}7.4 $q_{ix} \leftarrow t$ \\ | |
5359 8. $d \leftarrow \hat w$ \\ | |
5360 9. Clamp excess digits of $q$. \\ | |
5361 10. $c \leftarrow q$ \\ | |
5362 11. Return(\textit{MP\_OKAY}). \\ | |
5363 \hline | |
5364 \end{tabular} | |
5365 \end{center} | |
5366 \end{small} | |
5367 \caption{Algorithm mp\_div\_d} | |
5368 \end{figure} | |
5369 \textbf{Algorithm mp\_div\_d.} | |
5370 This algorithm divides the mp\_int $a$ by the single mp\_digit $b$ using an optimized approach. Essentially in every iteration of the | |
5371 algorithm another digit of the dividend is reduced and another digit of the quotient is produced. Provided $b < \beta$ the value of $\hat w$ | |
5372 after step 7.1 will be limited such that $0 \le \lfloor \hat w / b \rfloor < \beta$. | |
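
The per-digit loop can be illustrated over a raw digit array as in the following sketch (again with assumed typedefs, not the
bn\_mp\_div\_d.c source).

\begin{verbatim}
#include <stdint.h>

typedef uint32_t mp_digit;    /* assumed 28-bit digits for this sketch */
typedef uint64_t mp_word;
#define DIGIT_BIT 28

/* Divide the little-endian digit array a by the non-zero digit b.  The
 * quotient digits are written to q and the remainder is returned. */
mp_digit div_digit(const mp_digit *a, int used, mp_digit b, mp_digit *q)
{
   mp_word  w = 0;
   mp_digit t;
   int      ix;

   for (ix = used - 1; ix >= 0; ix--) {
      w = (w << DIGIT_BIT) | a[ix];   /* w <- w*beta + a_ix        */
      if (w >= b) {
         t  = (mp_digit)(w / b);      /* next quotient digit       */
         w -= (mp_word)t * b;         /* w <- w mod b              */
      } else {
         t = 0;
      }
      q[ix] = t;
   }
   return (mp_digit)w;                /* remainder                 */
}
\end{verbatim}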
5373 | |
5374 If the divisor $b$ is equal to three a variant of this algorithm is used which is called mp\_div\_3. It replaces the division by three with | |
5375 a multiplication by $\lfloor \beta / 3 \rfloor$ and the appropriate shift and residual fixup. In essence it is much like the Barrett reduction | |
5376 from chapter seven. | |
5377 | |
5378 EXAM,bn_mp_div_d.c | |
5379 | |
5380 Like the implementation of algorithm mp\_div this algorithm allows either of the quotient or remainder to be passed as a \textbf{NULL} pointer to | |
5381 indicate the respective value is not required. This allows a trivial single digit modular reduction algorithm, mp\_mod\_d to be created. | |
5382 | |
5383 The division and remainder on lines @44,/@ and @45,%@ can be replaced often by a single division on most processors. For example, the 32-bit x86 based | |
5384 processors can divide a 64-bit quantity by a 32-bit quantity and produce the quotient and remainder simultaneously. Unfortunately the GCC | |
5385 compiler does not recognize that optimization and will actually produce two function calls to find the quotient and remainder respectively. | |
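
For illustration, the pair of operations looks as follows in portable C; whether they become a single hardware divide is entirely
up to the compiler, and this fragment is not taken from the library.

\begin{verbatim}
#include <stdint.h>

/* Quotient and remainder of a 64-bit value by a 32-bit value.  Assumes the
 * quotient fits in 32 bits, as it does inside the loop of mp_div_d. */
void divrem(uint64_t w, uint32_t b, uint32_t *q, uint32_t *r)
{
   *q = (uint32_t)(w / b);   /* quotient  */
   *r = (uint32_t)(w % b);   /* remainder */
}
\end{verbatim}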
5386 | |
5387 \subsection{Single Digit Root Extraction} | |
5388 | |
5389 Finding the $n$'th root of an integer is fairly easy as far as numerical analysis is concerned. Algorithms such as the Newton-Raphson approximation | |
5390 (\ref{eqn:newton}) series will converge very quickly to a root for any continuous function $f(x)$. | |
5391 | |
5392 \begin{equation} | |
5393 x_{i+1} = x_i - {f(x_i) \over f'(x_i)} | |
5394 \label{eqn:newton} | |
5395 \end{equation} | |
5396 | |
5397 In this case the $n$'th root is desired and $f(x) = x^n - a$ where $a$ is the integer of which the root is desired. The derivative of $f(x)$ is | |
5398 simply $f'(x) = nx^{n - 1}$. Of particular importance is that this algorithm will be used over the integers and not over a more continuous domain | |
5399 such as the real numbers. As a result the root found can be above the true root by a few values and must be manually adjusted. Ideally at the end of the | |
5400 algorithm the $n$'th root $b$ of an integer $a$ is desired such that $b^n \le a$. | |
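
A native integer sketch of the same iteration, including the final fixup so that the returned value $b$ satisfies $b^n \le a$, is
shown below. The 32-bit bound on the input and the choice of starting guess are assumptions made to keep the sketch inside 64-bit
arithmetic; it is not the mp\_n\_root source.

\begin{verbatim}
#include <stdint.h>

/* x^n in 64-bit arithmetic (helper for the sketch below). */
static uint64_t ipow(uint64_t x, unsigned n)
{
   uint64_t r = 1;
   while (n-- > 0) {
      r *= x;
   }
   return r;
}

/* Integer Newton iteration for b = floor(a^(1/n)), n >= 2. */
uint32_t nth_root(uint32_t a, unsigned n)
{
   unsigned bits = 0;
   uint64_t x, prev;
   uint32_t t;

   if (a < 2 || n == 1) {
      return a;
   }
   for (t = a; t != 0; t >>= 1) {
      bits++;                                /* bit length of a           */
   }
   x = (uint64_t)1 << ((bits + n - 1) / n);  /* 2^ceil(bits/n) >= a^(1/n) */

   do {                                      /* x <- x - f(x)/f'(x)       */
      prev = x;
      x = ((n - 1) * x + a / ipow(x, n - 1)) / n;
   } while (x < prev);

   while (ipow(prev, n) > a) {
      prev--;                                /* adjust so prev^n <= a     */
   }
   while (ipow(prev + 1, n) <= a) {
      prev++;                                /* and make prev maximal     */
   }
   return (uint32_t)prev;
}
\end{verbatim}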
5401 | |
5402 \newpage\begin{figure}[!here] | |
5403 \begin{small} | |
5404 \begin{center} | |
5405 \begin{tabular}{l} | |
5406 \hline Algorithm \textbf{mp\_n\_root}. \\ | |
5407 \textbf{Input}. mp\_int $a$ and a mp\_digit $b$ \\ | |
5408 \textbf{Output}. $c^b \le a$ \\ | |
5409 \hline \\ | |
5410 1. If $b$ is even and $a.sign = MP\_NEG$ return(\textit{MP\_VAL}). \\ | |
5411 2. $sign \leftarrow a.sign$ \\ | |
5412 3. $a.sign \leftarrow MP\_ZPOS$ \\ | |
5413 4. t$2 \leftarrow 2$ \\ | |
5414 5. Loop \\ | |
5415 \hspace{3mm}5.1 t$1 \leftarrow $ t$2$ \\ | |
5416 \hspace{3mm}5.2 t$3 \leftarrow $ t$1^{b - 1}$ \\ | |
5417 \hspace{3mm}5.3 t$2 \leftarrow $ t$3 $ $\cdot$ t$1$ \\ | |
5418 \hspace{3mm}5.4 t$2 \leftarrow $ t$2 - a$ \\ | |
5419 \hspace{3mm}5.5 t$3 \leftarrow $ t$3 \cdot b$ \\ | |
5420 \hspace{3mm}5.6 t$3 \leftarrow \lfloor $t$2 / $t$3 \rfloor$ \\ | |
5421 \hspace{3mm}5.7 t$2 \leftarrow $ t$1 - $ t$3$ \\ | |
5422 \hspace{3mm}5.8 If t$1 \ne $ t$2$ then goto step 5. \\ | |
5423 6. Loop \\ | |
5424 \hspace{3mm}6.1 t$2 \leftarrow $ t$1^b$ \\ | |
5425 \hspace{3mm}6.2 If t$2 > a$ then \\ | |
5426 \hspace{6mm}6.2.1 t$1 \leftarrow $ t$1 - 1$ \\ | |
5427 \hspace{6mm}6.2.2 Goto step 6. \\ | |
5428 7. $a.sign \leftarrow sign$ \\ | |
5429 8. $c \leftarrow $ t$1$ \\ | |
5430 9. $c.sign \leftarrow sign$ \\ | |
5431 10. Return(\textit{MP\_OKAY}). \\ | |
5432 \hline | |
5433 \end{tabular} | |
5434 \end{center} | |
5435 \end{small} | |
5436 \caption{Algorithm mp\_n\_root} | |
5437 \end{figure} | |
5438 \textbf{Algorithm mp\_n\_root.} | |
5439 This algorithm finds the integer $n$'th root of an input using the Newton-Raphson approach. It is partially optimized based on the observation | |
5440 that the numerator of ${f(x) \over f'(x)}$ can be derived from a partial denominator. That is at first the denominator is calculated by finding | |
5441 $x^{b - 1}$. This value can then be multiplied by $x$ and have $a$ subtracted from it to find the numerator. This saves a total of $b - 1$ | |
5442 multiplications by t$1$ inside the loop. | |
5443 | |
5444 The initial value of the approximation is t$2 = 2$ which allows the algorithm to start with very small values and quickly converge on the | |
5445 root. Ideally this algorithm is meant to find the $n$'th root of an input where $n$ is bounded by $2 \le n \le 5$. | |
5446 | |
5447 EXAM,bn_mp_n_root.c | |
5448 | |
5449 \section{Random Number Generation} | |
5450 | |
5451 Random numbers come up in a variety of activities from public key cryptography to simple simulations and various randomized algorithms. Pollard-Rho | |
5452 factoring, for example, can make use of random values as starting points to find factors of a composite integer. In this case the algorithm presented | |
5453 is solely for simulations and not intended for cryptographic use. | |
5454 | |
5455 \newpage\begin{figure}[!here] | |
5456 \begin{small} | |
5457 \begin{center} | |
5458 \begin{tabular}{l} | |
5459 \hline Algorithm \textbf{mp\_rand}. \\ | |
5460 \textbf{Input}. An integer $b$ \\ | |
5461 \textbf{Output}. A pseudo-random number of $b$ digits \\ | |
5462 \hline \\ | |
5463 1. $a \leftarrow 0$ \\ | |
5464 2. If $b \le 0$ return(\textit{MP\_OKAY}) \\ | |
5465 3. Pick a non-zero random digit $d$. \\ | |
5466 4. $a \leftarrow a + d$ \\ | |
5467 5. for $ix$ from 1 to $b - 1$ do \\ | |
5468 \hspace{3mm}5.1 $a \leftarrow a \cdot \beta$ \\ | |
5469 \hspace{3mm}5.2 Pick a random digit $d$. \\ | |
5470 \hspace{3mm}5.3 $a \leftarrow a + d$ \\ | |
5471 6. Return(\textit{MP\_OKAY}). \\ | |
5472 \hline | |
5473 \end{tabular} | |
5474 \end{center} | |
5475 \end{small} | |
5476 \caption{Algorithm mp\_rand} | |
5477 \end{figure} | |
5478 \textbf{Algorithm mp\_rand.} | |
5479 This algorithm produces a pseudo-random integer of $b$ digits. By ensuring that the first digit is non-zero the algorithm also guarantees that the | |
5480 final result has at least $b$ digits. It relies heavily on a third-party random number generator which should ideally generate uniformly all of | |
5481 the integers from $0$ to $\beta - 1$. | |
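
The following sketch builds such a value with the C library rand() standing in for the third-party generator. It follows the
structure of the algorithm but is only an illustration: rand() does not necessarily span a full digit uniformly and is certainly
not suitable for cryptographic use.

\begin{verbatim}
#include <stdlib.h>
#include <tommath.h>

/* Produce a pseudo-random value of exactly b digits (simulation quality). */
int rand_mp(mp_int *a, int b)
{
   mp_digit d;
   int      ix, res;

   mp_zero(a);
   if (b <= 0) {
      return MP_OKAY;
   }

   do {
      d = (mp_digit)rand() & MP_MASK;   /* leading digit must be non-zero */
   } while (d == 0);
   mp_set(a, d);

   for (ix = 1; ix < b; ix++) {
      if ((res = mp_lshd(a, 1)) != MP_OKAY) {            /* a <- a * beta */
         return res;
      }
      if ((res = mp_add_d(a, (mp_digit)rand() & MP_MASK, a)) != MP_OKAY) {
         return res;
      }
   }
   return MP_OKAY;
}
\end{verbatim}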
5482 | |
5483 EXAM,bn_mp_rand.c | |
5484 | |
5485 \section{Formatted Representations} | |
5486 The ability to emit a radix-$n$ textual representation of an integer is useful for interacting with human parties. For example, the ability to | |
5487 be given a string of characters such as ``114585'' and turn it into the radix-$\beta$ equivalent would make it easier to enter numbers | |
5488 into a program. | |
5489 | |
5490 \subsection{Reading Radix-n Input} | |
5491 For the purposes of this text we will assume that a simple lower ASCII map (\ref{fig:ASC}) is used to map the values from $0$ to $63$ to | |
5492 printable characters. For example, when the character ``N'' is read it represents the integer $23$. The first $16$ characters of the | |
5493 map are for the common representations up to hexadecimal. After that they match the ``base64'' encoding scheme which is suitably chosen | |
5494 such that the characters are printable. While outputting as base64 may not be too helpful for human operators it does allow communication via non-binary | |
5495 mediums. | |
5496 | |
5497 \newpage\begin{figure}[here] | |
5498 \begin{center} | |
5499 \begin{tabular}{cc|cc|cc|cc} | |
5500 \hline \textbf{Value} & \textbf{Char} & \textbf{Value} & \textbf{Char} & \textbf{Value} & \textbf{Char} & \textbf{Value} & \textbf{Char} \\ | |
5501 \hline | |
5502 0 & 0 & 1 & 1 & 2 & 2 & 3 & 3 \\ | |
5503 4 & 4 & 5 & 5 & 6 & 6 & 7 & 7 \\ | |
5504 8 & 8 & 9 & 9 & 10 & A & 11 & B \\ | |
5505 12 & C & 13 & D & 14 & E & 15 & F \\ | |
5506 16 & G & 17 & H & 18 & I & 19 & J \\ | |
5507 20 & K & 21 & L & 22 & M & 23 & N \\ | |
5508 24 & O & 25 & P & 26 & Q & 27 & R \\ | |
5509 28 & S & 29 & T & 30 & U & 31 & V \\ | |
5510 32 & W & 33 & X & 34 & Y & 35 & Z \\ | |
5511 36 & a & 37 & b & 38 & c & 39 & d \\ | |
5512 40 & e & 41 & f & 42 & g & 43 & h \\ | |
5513 44 & i & 45 & j & 46 & k & 47 & l \\ | |
5514 48 & m & 49 & n & 50 & o & 51 & p \\ | |
5515 52 & q & 53 & r & 54 & s & 55 & t \\ | |
5516 56 & u & 57 & v & 58 & w & 59 & x \\ | |
5517 60 & y & 61 & z & 62 & $+$ & 63 & $/$ \\ | |
5518 \hline | |
5519 \end{tabular} | |
5520 \end{center} | |
5521 \caption{Lower ASCII Map} | |
5522 \label{fig:ASC} | |
5523 \end{figure} | |
5524 | |
5525 \newpage\begin{figure}[!here] | |
5526 \begin{small} | |
5527 \begin{center} | |
5528 \begin{tabular}{l} | |
5529 \hline Algorithm \textbf{mp\_read\_radix}. \\ | |
5530 \textbf{Input}. A string $str$ of length $sn$ and radix $r$. \\ | |
5531 \textbf{Output}. The radix-$\beta$ equivalent mp\_int. \\ | |
5532 \hline \\ | |
5533 1. If $r < 2$ or $r > 64$ return(\textit{MP\_VAL}). \\ | |
5534 2. $ix \leftarrow 0$ \\ | |
5535 3. If $str_0 =$ ``-'' then do \\ | |
5536 \hspace{3mm}3.1 $ix \leftarrow ix + 1$ \\ | |
5537 \hspace{3mm}3.2 $sign \leftarrow MP\_NEG$ \\ | |
5538 4. else \\ | |
5539 \hspace{3mm}4.1 $sign \leftarrow MP\_ZPOS$ \\ | |
5540 5. $a \leftarrow 0$ \\ | |
5541 6. for $iy$ from $ix$ to $sn - 1$ do \\ | |
5542 \hspace{3mm}6.1 Let $y$ denote the position in the map of $str_{iy}$. \\ | |
5543 \hspace{3mm}6.2 If $str_{iy}$ is not in the map or $y \ge r$ then goto step 7. \\ | |
5544 \hspace{3mm}6.3 $a \leftarrow a \cdot r$ \\ | |
5545 \hspace{3mm}6.4 $a \leftarrow a + y$ \\ | |
5546 7. If $a \ne 0$ then $a.sign \leftarrow sign$ \\ | |
5547 8. Return(\textit{MP\_OKAY}). \\ | |
5548 \hline | |
5549 \end{tabular} | |
5550 \end{center} | |
5551 \end{small} | |
5552 \caption{Algorithm mp\_read\_radix} | |
5553 \end{figure} | |
5554 \textbf{Algorithm mp\_read\_radix.} | |
5555 This algorithm will read an ASCII string and produce the radix-$\beta$ mp\_int representation of the same integer. A minus symbol ``-'' may precede the | |
5556 string to indicate the value is negative, otherwise it is assumed to be positive. The algorithm will read up to $sn$ characters from the input | |
5557 and will stop as soon as it reads a character it cannot map to a valid value. This allows numbers to be embedded | |
5558 as part of larger input without any significant problem. | |
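
The read loop amounts to Horner's rule. The following sketch works over native integers rather than mp\_ints and is therefore only
an illustration of the idea, not the bn\_mp\_read\_radix.c source; it assumes the value fits in a long and makes no attempt to fold
lower case letters for small radices.

\begin{verbatim}
#include <string.h>

static const char *map =
   "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+/";

/* Read a radix-r value from str, stopping at the first unmapped character. */
long read_radix(const char *str, int radix)
{
   long v   = 0;
   int  neg = 0;

   if (*str == '-') {
      neg = 1;
      str++;
   }
   for (; *str != '\0'; str++) {
      const char *p = strchr(map, *str);
      if (p == NULL || (int)(p - map) >= radix) {
         break;                        /* unmapped character: stop reading */
      }
      v = v * radix + (long)(p - map); /* a <- a*r + y                     */
   }
   return neg ? -v : v;
}
\end{verbatim}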
5559 | |
5560 EXAM,bn_mp_read_radix.c | |
5561 | |
5562 \subsection{Generating Radix-$n$ Output} | |
5563 Generating radix-$n$ output is fairly trivial with a division and remainder algorithm. | |
5564 | |
5565 \newpage\begin{figure}[!here] | |
5566 \begin{small} | |
5567 \begin{center} | |
5568 \begin{tabular}{l} | |
5569 \hline Algorithm \textbf{mp\_toradix}. \\ | |
5570 \textbf{Input}. A mp\_int $a$ and an integer $r$\\ | |
5571 \textbf{Output}. The radix-$r$ representation of $a$ \\ | |
5572 \hline \\ | |
5573 1. If $r < 2$ or $r > 64$ return(\textit{MP\_VAL}). \\ | |
5574 2. If $a = 0$ then $str = $ ``$0$'' and return(\textit{MP\_OKAY}). \\ | |
5575 3. $t \leftarrow a$ \\ | |
5576 4. $str \leftarrow$ ``'' \\ | |
5577 5. if $t.sign = MP\_NEG$ then \\ | |
5578 \hspace{3mm}5.1 $str \leftarrow str + $ ``-'' \\ | |
5579 \hspace{3mm}5.2 $t.sign = MP\_ZPOS$ \\ | |
5580 6. While ($t \ne 0$) do \\ | |
5581 \hspace{3mm}6.1 $d \leftarrow t \mbox{ (mod }r\mbox{)}$ \\ | |
5582 \hspace{3mm}6.2 $t \leftarrow \lfloor t / r \rfloor$ \\ | |
5583 \hspace{3mm}6.3 Look up $d$ in the map and store the equivalent character in $y$. \\ | |
5584 \hspace{3mm}6.4 $str \leftarrow str + y$ \\ | |
5585 7. If $str_0 = $``$-$'' then \\ | |
5586 \hspace{3mm}7.1 Reverse the digits $str_1, str_2, \ldots str_n$. \\ | |
5587 8. Otherwise \\ | |
5588 \hspace{3mm}8.1 Reverse the digits $str_0, str_1, \ldots str_n$. \\ | |
5589 9. Return(\textit{MP\_OKAY}).\\ | |
5590 \hline | |
5591 \end{tabular} | |
5592 \end{center} | |
5593 \end{small} | |
5594 \caption{Algorithm mp\_toradix} | |
5595 \end{figure} | |
5596 \textbf{Algorithm mp\_toradix.} | |
5597 This algorithm computes the radix-$r$ representation of an mp\_int $a$. The ``digits'' of the representation are extracted by reducing the | |
5598 successive quotients $\lfloor a / r^k \rfloor$ of the input modulo $r$ until $r^k > a$. Note that instead of actually dividing by $r^k$ in | |
5599 each iteration the quotient $\lfloor a / r \rfloor$ is saved for the next iteration. As a result a series of trivial $n \times 1$ divisions | |
5600 are required instead of a series of $n \times k$ divisions. One design flaw of this approach is that the digits are produced in the reverse order | |
5601 (see~\ref{fig:mpradix}). To remedy this flaw the digits must be swapped or simply ``reversed''. | |
5602 | |
5603 \begin{figure} | |
5604 \begin{center} | |
5605 \begin{tabular}{|c|c|c|} | |
5606 \hline \textbf{Value of $a$} & \textbf{Value of $d$} & \textbf{Value of $str$} \\ | |
5607 \hline $1234$ & -- & -- \\ | |
5608 \hline $123$ & $4$ & ``4'' \\ | |
5609 \hline $12$ & $3$ & ``43'' \\ | |
5610 \hline $1$ & $2$ & ``432'' \\ | |
5611 \hline $0$ & $1$ & ``4321'' \\ | |
5612 \hline | |
5613 \end{tabular} | |
5614 \end{center} | |
5615 \caption{Example of Algorithm mp\_toradix.} | |
5616 \label{fig:mpradix} | |
5617 \end{figure} | |
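
The same extraction loop, written over a native integer purely for illustration (it is not the bn\_mp\_toradix.c source), shows
both the repeated division and the final reversal of the digits.

\begin{verbatim}
#include <string.h>

/* Write the radix-r representation of a into str, which must be large
 * enough to hold the digits and the terminator. */
void to_radix(unsigned long a, int r, char *str)
{
   static const char *map =
      "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz+/";
   int  i, j, len = 0;
   char tmp;

   if (a == 0) {
      str[len++] = '0';
   }
   while (a != 0) {
      str[len++] = map[a % r];   /* d <- t mod r, mapped to a character */
      a /= r;                    /* t <- floor(t / r)                   */
   }
   str[len] = '\0';

   /* digits were produced least significant first; reverse them */
   for (i = 0, j = len - 1; i < j; i++, j--) {
      tmp = str[i]; str[i] = str[j]; str[j] = tmp;
   }
}
\end{verbatim}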
5618 | |
5619 EXAM,bn_mp_toradix.c | |
5620 | |
5621 \chapter{Number Theoretic Algorithms} | |
5622 This chapter discusses several fundamental number theoretic algorithms such as the greatest common divisor, least common multiple and Jacobi | |
5623 symbol computation. These algorithms arise as essential components in several key cryptographic algorithms such as the RSA public key algorithm and | |
5624 various Sieve based factoring algorithms. | |
5625 | |
5626 \section{Greatest Common Divisor} | |
5627 The greatest common divisor of two integers $a$ and $b$, often denoted as $(a, b)$, is the largest integer $k$ that is a divisor of | |
5628 both $a$ and $b$. That is, $k$ is the largest integer such that $0 \equiv a \mbox{ (mod }k\mbox{)}$ and $0 \equiv b \mbox{ (mod }k\mbox{)}$ occur | |
5629 simultaneously. | |
5630 | |
5631 The most common approach (cite) is to reduce one input modulo another. That is if $a$ and $b$ are divisible by some integer $k$ and if $qa + r = b$ then | |
5632 $r$ is also divisible by $k$. The reduction pattern follows $\left < a , b \right > \rightarrow \left < b, a \mbox{ mod } b \right >$. | |
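
Written over native unsigned integers the reduction pattern is only a few lines; the sketch below illustrates the idea and is not
library code.

\begin{verbatim}
/* Greatest common divisor by repeated reduction <a, b> -> <b, a mod b>. */
unsigned long gcd(unsigned long a, unsigned long b)
{
   unsigned long r;

   while (b > 0) {
      r = a % b;    /* r <- a mod b */
      a = b;
      b = r;
   }
   return a;
}
\end{verbatim}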
5633 | |
5634 \newpage\begin{figure}[!here] | |
5635 \begin{small} | |
5636 \begin{center} | |
5637 \begin{tabular}{l} | |
5638 \hline Algorithm \textbf{Greatest Common Divisor (I)}. \\ | |
5639 \textbf{Input}. Two positive integers $a$ and $b$ greater than zero. \\ | |
5640 \textbf{Output}. The greatest common divisor $(a, b)$. \\ | |
5641 \hline \\ | |
5642 1. While ($b > 0$) do \\ | |
5643 \hspace{3mm}1.1 $r \leftarrow a \mbox{ (mod }b\mbox{)}$ \\ | |
5644 \hspace{3mm}1.2 $a \leftarrow b$ \\ | |
5645 \hspace{3mm}1.3 $b \leftarrow r$ \\ | |
5646 2. Return($a$). \\ | |
5647 \hline | |
5648 \end{tabular} | |
5649 \end{center} | |
5650 \end{small} | |
5651 \caption{Algorithm Greatest Common Divisor (I)} | |
5652 \label{fig:gcd1} | |
5653 \end{figure} | |
5654 | |
5655 This algorithm will quickly converge on the greatest common divisor since the residue $r$ tends to diminish rapidly. However, divisions are | |
5656 relatively expensive operations to perform and should ideally be avoided. There is another approach based on a similar relationship of | |
5657 greatest common divisors. The faster approach is based on the observation that if $k$ divides both $a$ and $b$ it will also divide $a - b$. | |
5658 In particular, we would like $a - b$ to decrease in magnitude which implies that $b \ge a$. | |
5659 | |
5660 \begin{figure}[!here] | |
5661 \begin{small} | |
5662 \begin{center} | |
5663 \begin{tabular}{l} | |
5664 \hline Algorithm \textbf{Greatest Common Divisor (II)}. \\ | |
5665 \textbf{Input}. Two positive integers $a$ and $b$ greater than zero. \\ | |
5666 \textbf{Output}. The greatest common divisor $(a, b)$. \\ | |
5667 \hline \\ | |
5668 1. While ($b > 0$) do \\ | |
5669 \hspace{3mm}1.1 Swap $a$ and $b$ such that $a$ is the smallest of the two. \\ | |
5670 \hspace{3mm}1.2 $b \leftarrow b - a$ \\ | |
5671 2. Return($a$). \\ | |
5672 \hline | |
5673 \end{tabular} | |
5674 \end{center} | |
5675 \end{small} | |
5676 \caption{Algorithm Greatest Common Divisor (II)} | |
5677 \label{fig:gcd2} | |
5678 \end{figure} | |
5679 | |
5680 \textbf{Proof} \textit{Algorithm~\ref{fig:gcd2} will return the greatest common divisor of $a$ and $b$.} | |
5681 The algorithm in figure~\ref{fig:gcd2} will eventually terminate since $b \ge a$ and the subtraction in step 1.2 produces a value less than $b$. In other | |
5682 words in every iteration the tuple $\left < a, b \right >$ decreases in magnitude until eventually $a = b$. Since both $a$ and $b$ are always | |
5683 divisible by the greatest common divisor (\textit{until the last iteration}) and in the last iteration of the algorithm $b = 0$, therefore, in the | |
5684 second to last iteration of the algorithm $b = a$ and clearly $(a, a) = a$ which concludes the proof. \textbf{QED}. | |
5685 | |
5686 As a matter of practicality algorithm \ref{fig:gcd2} decreases far too slowly to be useful, especially if $b$ is much larger than $a$ such that | |
5687 $b - a$ is still very much larger than $a$. A simple addition to the algorithm is to divide $b - a$ by a power of some integer $p$ which does | |
5688 not divide the greatest common divisor but will divide $b - a$. In this case ${b - a} \over p$ is also an integer and still divisible by | |
5689 the greatest common divisor. | |
5690 | |
5691 However, instead of factoring $b - a$ to find a suitable value of $p$, the powers of $p$ that are in common can first be removed from $a$ and $b$. | |
5692 Then inside the loop whenever $b - a$ is divisible by some power of $p$ it can be safely removed. | |
5693 | |
5694 \begin{figure}[!here] | |
5695 \begin{small} | |
5696 \begin{center} | |
5697 \begin{tabular}{l} | |
5698 \hline Algorithm \textbf{Greatest Common Divisor (III)}. \\ | |
5699 \textbf{Input}. Two positive integers $a$ and $b$ greater than zero. \\ | |
5700 \textbf{Output}. The greatest common divisor $(a, b)$. \\ | |
5701 \hline \\ | |
5702 1. $k \leftarrow 0$ \\ | |
5703 2. While $a$ and $b$ are both divisible by $p$ do \\ | |
5704 \hspace{3mm}2.1 $a \leftarrow \lfloor a / p \rfloor$ \\ | |
5705 \hspace{3mm}2.2 $b \leftarrow \lfloor b / p \rfloor$ \\ | |
5706 \hspace{3mm}2.3 $k \leftarrow k + 1$ \\ | |
5707 3. While $a$ is divisible by $p$ do \\ | |
5708 \hspace{3mm}3.1 $a \leftarrow \lfloor a / p \rfloor$ \\ | |
5709 4. While $b$ is divisible by $p$ do \\ | |
5710 \hspace{3mm}4.1 $b \leftarrow \lfloor b / p \rfloor$ \\ | |
5711 5. While ($b > 0$) do \\ | |
5712 \hspace{3mm}5.1 Swap $a$ and $b$ such that $a$ is the smallest of the two. \\ | |
5713 \hspace{3mm}5.2 $b \leftarrow b - a$ \\ | |
5714 \hspace{3mm}5.3 While $b$ is divisible by $p$ do \\ | |
5715 \hspace{6mm}5.3.1 $b \leftarrow \lfloor b / p \rfloor$ \\ | |
5716 6. Return($a \cdot p^k$). \\ | |
5717 \hline | |
5718 \end{tabular} | |
5719 \end{center} | |
5720 \end{small} | |
5721 \caption{Algorithm Greatest Common Divisor (III)} | |
5722 \label{fig:gcd3} | |
5723 \end{figure} | |
5724 | |
5725 This algorithm is based on the previous algorithm except it removes powers of $p$ both before and inside the main loop to ensure the tuple $\left < a, b \right >$ | |
5726 decreases more rapidly. The first loop on step two removes powers of $p$ that are in common. A count, $k$, is kept which records the common | |
5727 divisor $p^k$. After step two the remaining common divisor of $a$ and $b$ cannot be divisible by $p$. This means that $p$ can be safely | |
5728 divided out of the difference $b - a$ so long as the division leaves no remainder. | |
5729 | |
5730 In particular the value of $p$ should be chosen such that the division on step 5.3.1 occurs often. It also helps if division by $p$ is easy | |
5731 to compute. The ideal choice of $p$ is two since division by two amounts to a right logical shift. Another important observation is that by | |
5732 step five both $a$ and $b$ are odd. Therefore, the difference $b - a$ must be even which means that each iteration removes at least one bit from the | |
5733 largest of the pair. | |
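To make the choice of $p = 2$ concrete, the following is a minimal sketch of the same reduction written for single precision (machine word) integers. It is only an illustration of the technique; the routines that follow operate on mp\_int values instead.

\begin{verbatim}
/* Binary greatest common divisor on machine words (illustrative sketch). */
unsigned long gcd_binary(unsigned long a, unsigned long b)
{
   int k = 0;

   if (a == 0) return b;
   if (b == 0) return a;

   /* remove the common factors of two, keeping a count in k */
   while (((a | b) & 1) == 0) {
      a >>= 1; b >>= 1; ++k;
   }

   /* remove the remaining independent factors of two */
   while ((a & 1) == 0) a >>= 1;
   while ((b & 1) == 0) b >>= 1;

   /* both values are now odd; subtract and strip factors of two
      from the (even) difference */
   while (b > 0) {
      if (a > b) { unsigned long t = a; a = b; b = t; }
      b = b - a;
      while (b > 0 && (b & 1) == 0) b >>= 1;
   }

   /* restore the common factors of two removed earlier */
   return a << k;
}
\end{verbatim}

For example, gcd\_binary(12, 126) strips one common factor of two, reduces the pair to $\left < 3, 0 \right >$ and returns $3 \cdot 2 = 6$.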
5734 | |
5735 \subsection{Complete Greatest Common Divisor} | |
5736 The algorithms presented so far cannot handle inputs which are zero or negative. The following algorithm can handle all input cases properly | |
5737 and will produce the greatest common divisor. | |
5738 | |
5739 \newpage\begin{figure}[!here] | |
5740 \begin{small} | |
5741 \begin{center} | |
5742 \begin{tabular}{l} | |
5743 \hline Algorithm \textbf{mp\_gcd}. \\ | |
5744 \textbf{Input}. mp\_int $a$ and $b$ \\ | |
5745 \textbf{Output}. The greatest common divisor $c = (a, b)$. \\ | |
5746 \hline \\ | |
5747 1. If $a = 0$ and $b \ne 0$ then \\ | |
5748 \hspace{3mm}1.1 $c \leftarrow b$ \\ | |
5749 \hspace{3mm}1.2 Return(\textit{MP\_OKAY}). \\ | |
5750 2. If $a \ne 0$ and $b = 0$ then \\ | |
5751 \hspace{3mm}2.1 $c \leftarrow a$ \\ | |
5752 \hspace{3mm}2.2 Return(\textit{MP\_OKAY}). \\ | |
5753 3. If $a = b = 0$ then \\ | |
5754 \hspace{3mm}3.1 $c \leftarrow 1$ \\ | |
5755 \hspace{3mm}3.2 Return(\textit{MP\_OKAY}). \\ | |
5756 4. $u \leftarrow \vert a \vert, v \leftarrow \vert b \vert$ \\ | |
5757 5. $k \leftarrow 0$ \\ | |
5758 6. While $u.used > 0$ and $v.used > 0$ and $u_0 \equiv v_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
5759 \hspace{3mm}6.1 $k \leftarrow k + 1$ \\ | |
5760 \hspace{3mm}6.2 $u \leftarrow \lfloor u / 2 \rfloor$ \\ | |
5761 \hspace{3mm}6.3 $v \leftarrow \lfloor v / 2 \rfloor$ \\ | |
5762 7. While $u.used > 0$ and $u_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
5763 \hspace{3mm}7.1 $u \leftarrow \lfloor u / 2 \rfloor$ \\ | |
5764 8. While $v.used > 0$ and $v_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
5765 \hspace{3mm}8.1 $v \leftarrow \lfloor v / 2 \rfloor$ \\ | |
5766 9. While $v.used > 0$ \\ | |
5767 \hspace{3mm}9.1 If $\vert u \vert > \vert v \vert$ then \\ | |
5768 \hspace{6mm}9.1.1 Swap $u$ and $v$. \\ | |
5769 \hspace{3mm}9.2 $v \leftarrow \vert v \vert - \vert u \vert$ \\ | |
5770 \hspace{3mm}9.3 While $v.used > 0$ and $v_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
5771 \hspace{6mm}9.3.1 $v \leftarrow \lfloor v / 2 \rfloor$ \\ | |
5772 10. $c \leftarrow u \cdot 2^k$ \\ | |
5773 11. Return(\textit{MP\_OKAY}). \\ | |
5774 \hline | |
5775 \end{tabular} | |
5776 \end{center} | |
5777 \end{small} | |
5778 \caption{Algorithm mp\_gcd} | |
5779 \end{figure} | |
5780 \textbf{Algorithm mp\_gcd.} | |
5781 This algorithm will produce the greatest common divisor of two mp\_ints $a$ and $b$. The algorithm was originally based on Algorithm B of | |
5782 Knuth \cite[pp. 338]{TAOCPV2} but has been modified to be simpler to explain. In theory it achieves the same asymptotic working time as | |
5783 Algorithm B and in practice this appears to be true. | |
5784 | |
5785 The first three steps handle the cases where either one of or both inputs are zero. If either input is zero the greatest common divisor is the | |
5786 largest input or zero if they are both zero. If the inputs are not trivial then $u$ and $v$ are assigned the absolute values of | |
5787 $a$ and $b$ respectively and the algorithm will proceed to reduce the pair. | |
5788 | |
5789 Step six will divide out any common factors of two and keep track of the count in the variable $k$. After this step, two is no longer a | |
5790 factor of the remaining greatest common divisor of $u$ and $v$ and can safely be divided out of either value whenever it is even. Steps | |
5791 seven and eight ensure that $u$ and $v$ respectively have no more factors of two. At most only one of the two while loops will iterate since | |
5792 they cannot both be even. | |
5793 | |
5794 By step nine both $u$ and $v$ are odd which is required for the inner logic. First the pair are swapped such that $v$ is equal to | |
5795 or greater than $u$. This ensures that the subtraction on step 9.2 will always produce a positive and even result. Step 9.3 removes any | |
5796 factors of two from the difference stored in $v$ to ensure that in the next iteration of the loop both values are once again odd. | |
5797 | |
5798 After $v = 0$ occurs, the variable $u$ holds the greatest common divisor of the pair $\left < u, v \right >$ as it stood just after step six. The result | |
5799 must be adjusted by multiplying it by the common factors of two ($2^k$) removed earlier. | |
5800 | |
5801 EXAM,bn_mp_gcd.c | |
5802 | |
5803 This function makes use of the macros mp\_iszero and mp\_iseven. The former evaluates to $1$ if the input mp\_int is equivalent to the | |
5804 integer zero, otherwise it evaluates to $0$. The latter evaluates to $1$ if the input mp\_int represents a non-zero even integer, otherwise | |
5805 it evaluates to $0$. Note that mp\_iseven evaluating to $0$ does not mean the input is odd; it could also be zero. The three | |
5806 trivial cases of inputs are handled on lines @25,zero@ through @34,}@. After those lines the inputs are assumed to be non-zero. | |
5807 | |
5808 Lines @36,if@ and @40,if@ make local copies $u$ and $v$ of the inputs $a$ and $b$ respectively. At this point the common factors of two | |
5809 must be divided out of the two inputs. The while loop on line @49,while@ iterates so long as both are even. The local integer $k$ is used to | |
5810 keep track of how many factors of $2$ are pulled out of both values. It is assumed that the number of factors will not exceed the maximum | |
5811 value of a C ``int'' data type\footnote{Strictly speaking no array in C may have more entries than are accessible by an ``int'' so this is not | |
5812 a limitation.}. | |
5813 | |
5814 At this point there are no more common factors of two in the two values. The while loops on lines @60,while@ and @65,while@ remove any independent | |
5815 factors of two such that both $u$ and $v$ are guaranteed to be an odd integer before hitting the main body of the algorithm. The while loop | |
5816 on line @71, while@ performs the reduction of the pair until $v$ is equal to zero. The unsigned comparison and subtraction algorithms are used in | |
5817 place of the full signed routines since both values are guaranteed to be positive and the result of the subtraction is guaranteed to be non-negative. | |
5818 | |
5819 \section{Least Common Multiple} | |
5820 The least common multiple of a pair of integers is their product divided by their greatest common divisor. For two integers $a$ and $b$ the | |
5821 least common multiple is normally denoted as $[ a, b ]$ and numerically equivalent to ${ab} \over {(a, b)}$. For example, if $a = 2 \cdot 2 \cdot 3 = 12$ | |
5822 and $b = 2 \cdot 3 \cdot 3 \cdot 7 = 126$ the least common multiple is ${{12 \cdot 126} \over {(12, 126)}} = {1512 \over 6} = 252$. | |
5823 | |
5824 The least common multiple arises often in coding theory as well as number theory. If two functions have periods of $a$ and $b$ respectively they will | |
5825 collide, that is be in synchronous states, after $[ a, b ]$ iterations. This is why, for example, random number generators based on | |
5826 Linear Feedback Shift Registers (LFSR) tend to use registers with periods which are co-prime (\textit{i.e. the greatest common divisor is one}). | |
5827 Similarly in number theory if a composite $n$ has two prime factors $p$ and $q$ then the maximal order of any unit of $\Z/n\Z$ will be $[ p - 1, q - 1] $. | |
5828 | |
5829 \begin{figure}[!here] | |
5830 \begin{small} | |
5831 \begin{center} | |
5832 \begin{tabular}{l} | |
5833 \hline Algorithm \textbf{mp\_lcm}. \\ | |
5834 \textbf{Input}. mp\_int $a$ and $b$ \\ | |
5835 \textbf{Output}. The least common multiple $c = [a, b]$. \\ | |
5836 \hline \\ | |
5837 1. $c \leftarrow (a, b)$ \\ | |
5838 2. $t \leftarrow a \cdot b$ \\ | |
5839 3. $c \leftarrow \lfloor t / c \rfloor$ \\ | |
5840 4. Return(\textit{MP\_OKAY}). \\ | |
5841 \hline | |
5842 \end{tabular} | |
5843 \end{center} | |
5844 \end{small} | |
5845 \caption{Algorithm mp\_lcm} | |
5846 \end{figure} | |
5847 \textbf{Algorithm mp\_lcm.} | |
5848 This algorithm computes the least common multiple of two mp\_int inputs $a$ and $b$. It computes the least common multiple directly by | |
5849 dividing the product of the two inputs by their greatest common divisor. | |
5850 | |
5851 EXAM,bn_mp_lcm.c | |
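As a short usage sketch, and assuming the usual tommath.h interfaces used throughout this text (mp\_init\_multi, mp\_set\_int, mp\_toradix and friends), the following program computes both the greatest common divisor and the least common multiple of the running example $12$ and $126$.

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <tommath.h>

int main(void)
{
   mp_int a, b, g, l;
   char   buf[128];

   if (mp_init_multi(&a, &b, &g, &l, NULL) != MP_OKAY) {
      return EXIT_FAILURE;
   }

   /* a = 12, b = 126 as in the running example */
   mp_set_int(&a, 12);
   mp_set_int(&b, 126);

   /* g = (a, b) and l = [a, b] */
   if (mp_gcd(&a, &b, &g) != MP_OKAY ||
       mp_lcm(&a, &b, &l) != MP_OKAY) {
      mp_clear_multi(&a, &b, &g, &l, NULL);
      return EXIT_FAILURE;
   }

   mp_toradix(&g, buf, 10);
   printf("gcd = %s\n", buf);
   mp_toradix(&l, buf, 10);
   printf("lcm = %s\n", buf);

   mp_clear_multi(&a, &b, &g, &l, NULL);
   return EXIT_SUCCESS;
}
\end{verbatim}

The expected output is a greatest common divisor of $6$ and a least common multiple of $252$.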
5852 | |
5853 \section{Jacobi Symbol Computation} | |
5854 To explain the Jacobi symbol we shall first discuss the Legendre symbol upon which the Jacobi symbol is | |
5855 defined. The Legendre symbol indicates whether or not an integer $a$ is a quadratic residue modulo an odd prime $p$. Numerically it is | |
5856 equivalent to equation \ref{eqn:legendre}. | |
5857 | |
5859 |
19 | 5860 \begin{equation} |
5861 a^{(p-1)/2} \equiv \begin{array}{rl} | |
5862 -1 & \mbox{if }a\mbox{ is a quadratic non-residue.} \\ | |
5863 0 & \mbox{if }p\mbox{ divides }a\mbox{.} \\ | |
5864 1 & \mbox{if }a\mbox{ is a quadratic residue}. | |
5865 \end{array} \mbox{ (mod }p\mbox{)} | |
5866 \label{eqn:legendre} | |
5867 \end{equation} | |
5868 | |
5869 \textbf{Proof.} \textit{Equation \ref{eqn:legendre} correctly identifies the residue status of an integer $a$ modulo a prime $p$.} | |
5870 An integer $a$ is a quadratic residue if the following equation has a solution. | |
5871 | |
5872 \begin{equation} | |
5873 x^2 \equiv a \mbox{ (mod }p\mbox{)} | |
5874 \label{eqn:root} | |
5875 \end{equation} | |
5876 | |
5877 Consider the following equation. | |
5878 | |
5879 \begin{equation} | |
5880 0 \equiv x^{p-1} - 1 \equiv \left \lbrace \left (x^2 \right )^{(p-1)/2} - a^{(p-1)/2} \right \rbrace + \left ( a^{(p-1)/2} - 1 \right ) \mbox{ (mod }p\mbox{)} | |
5881 \label{eqn:rooti} | |
5882 \end{equation} | |
5883 | |
5884 Equation \ref{eqn:rooti} is always true whether or not equation \ref{eqn:root} has a solution. If $a^{(p-1)/2} - 1 \equiv 0 \mbox{ (mod }p\mbox{)}$ | |
5885 then the quantity in the braces must be zero. By reduction, | |
5886 | |
5887 \begin{eqnarray} | |
5888 \left (x^2 \right )^{(p-1)/2} - a^{(p-1)/2} \equiv 0 \nonumber \\ | |
5889 \left (x^2 \right )^{(p-1)/2} \equiv a^{(p-1)/2} \nonumber \\ | |
5890 x^2 \equiv a \mbox{ (mod }p\mbox{)} | |
5891 \end{eqnarray} | |
5892 | |
5893 As a result there must be a solution to the quadratic equation and in turn $a$ must be a quadratic residue. If $p$ does not divide $a$ and $a$ | |
5894 is not a quadratic residue then the only other value $a^{(p-1)/2}$ may be congruent to is $-1$ since | |
5895 \begin{equation} | |
5896 0 \equiv a^{p - 1} - 1 \equiv (a^{(p-1)/2} + 1)(a^{(p-1)/2} - 1) \mbox{ (mod }p\mbox{)} | |
5897 \end{equation} | |
5898 One of the terms on the right hand side must be zero. \textbf{QED} | |
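For example, let $p = 7$. Then $2^{(7-1)/2} = 8 \equiv 1 \mbox{ (mod }7\mbox{)}$ and indeed $3^2 \equiv 2 \mbox{ (mod }7\mbox{)}$, so $2$ is a quadratic residue modulo $7$. Conversely, $3^{(7-1)/2} = 27 \equiv -1 \mbox{ (mod }7\mbox{)}$ and $3$ is a quadratic non-residue, since none of $1^2, 2^2, \ldots, 6^2$ is congruent to $3$ modulo $7$.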
5899 | |
5900 \subsection{Jacobi Symbol} | |
5901 The Jacobi symbol is a generalization of the Legendre symbol to any odd modulus $p$ greater than two, prime or not. If $p = \prod_{i=0}^n p_i$ is the prime factorization of $p$ then | |
5902 the Jacobi symbol $\left ( { a \over p } \right )$ is defined by the following equation. | |
5903 | |
5904 \begin{equation} | |
5905 \left ( { a \over p } \right ) = \left ( { a \over p_0} \right ) \left ( { a \over p_1} \right ) \ldots \left ( { a \over p_n} \right ) | |
5906 \end{equation} | |
5907 | |
5908 By inspection if $p$ is prime the Jacobi symbol is equivalent to the Legendre symbol. The following facts\footnote{See HAC \cite[pp. 72-74]{HAC} for | |
5909 further details.} will be used to derive an efficient Jacobi symbol algorithm. Where $p$ is an odd integer greater than two and $a, b \in \Z$ the | |
5910 following are true. | |
5911 | |
5912 \begin{enumerate} | |
5913 \item $\left ( { a \over p} \right )$ equals $-1$, $0$ or $1$. | |
5914 \item $\left ( { ab \over p} \right ) = \left ( { a \over p} \right )\left ( { b \over p} \right )$. | |
5915 \item If $a \equiv b \mbox{ (mod }p\mbox{)}$ then $\left ( { a \over p} \right ) = \left ( { b \over p} \right )$. | |
5916 \item $\left ( { 2 \over p} \right )$ equals $1$ if $p \equiv 1$ or $7 \mbox{ (mod }8\mbox{)}$. Otherwise, it equals $-1$. | |
5917 \item $\left ( { a \over p} \right ) \equiv \left ( { p \over a} \right ) \cdot (-1)^{(p-1)(a-1)/4}$. More specifically | |
5918 $\left ( { a \over p} \right ) = \left ( { p \over a} \right )$ if $p \equiv a \equiv 1 \mbox{ (mod }4\mbox{)}$. | |
5919 \end{enumerate} | |
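As a small worked example of these facts consider $\left ( { 5 \over 7 } \right )$. Since $5 \equiv 1 \mbox{ (mod }4\mbox{)}$ the exponent $(7-1)(5-1)/4$ is even and fact five gives $\left ( { 5 \over 7 } \right ) = \left ( { 7 \over 5 } \right )$. By fact three $\left ( { 7 \over 5 } \right ) = \left ( { 2 \over 5 } \right )$ since $7 \equiv 2 \mbox{ (mod }5\mbox{)}$, and by fact four $\left ( { 2 \over 5 } \right ) = -1$ since $5 \equiv 5 \mbox{ (mod }8\mbox{)}$. Therefore $5$ is a quadratic non-residue modulo $7$.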
5920 | |
5921 Using these facts if $a = 2^k \cdot a'$ then | |
5922 | |
5923 \begin{eqnarray} | |
5924 \left ( { a \over p } \right ) = \left ( {{2^k} \over p } \right ) \left ( {a' \over p} \right ) \nonumber \\ | |
5925 = \left ( {2 \over p } \right )^k \left ( {a' \over p} \right ) | |
5926 \label{eqn:jacobi} | |
5927 \end{eqnarray} | |
5928 | |
5929 By fact five, | |
5930 | |
5931 \begin{equation} | |
5932 \left ( { a \over p } \right ) = \left ( { p \over a } \right ) \cdot (-1)^{(p-1)(a-1)/4} | |
5933 \end{equation} | |
5934 | |
5935 Subsequently by fact three since $p \equiv (p \mbox{ mod }a) \mbox{ (mod }a\mbox{)}$ then | |
5936 | |
5937 \begin{equation} | |
5938 \left ( { a \over p } \right ) = \left ( { {p \mbox{ mod } a} \over a } \right ) \cdot (-1)^{(p-1)(a-1)/4} | |
5939 \end{equation} | |
5940 | |
5941 By putting both observations into equation \ref{eqn:jacobi} the following simplified equation is formed. | |
5942 | |
5943 \begin{equation} | |
5944 \left ( { a \over p } \right ) = \left ( {2 \over p } \right )^k \left ( {{p\mbox{ mod }a'} \over a'} \right ) \cdot (-1)^{(p-1)(a'-1)/4} | |
5945 \end{equation} | |
5946 | |
5947 The value of $\left ( {{p \mbox{ mod }a'} \over a'} \right )$ can be found by using the same equation recursively. The value of | |
5948 $\left ( {2 \over p } \right )^k$ equals $1$ if $k$ is even otherwise it equals $\left ( {2 \over p } \right )$. Using this approach the | |
5949 factors of $p$ do not have to be known. Furthermore, if $(a, p) = 1$ then the algorithm will terminate when the recursion requests the | |
5950 Jacobi symbol computation of $\left ( {1 \over a'} \right )$ which is simply $1$. | |
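Before the multiple precision routine is presented, the following is a minimal machine word sketch of this reduction, written iteratively with the swap of the two arguments playing the role of the recursion. It assumes an odd $n \ge 3$ and is only an illustration of the facts above.

\begin{verbatim}
/* Jacobi symbol (a/n) for odd n >= 3; returns -1, 0 or 1 (sketch only). */
int jacobi_small(unsigned long a, unsigned long n)
{
   int s = 1;

   a %= n;
   while (a != 0) {
      /* remove factors of two from a, applying fact four for each one */
      while ((a & 1) == 0) {
         a >>= 1;
         if ((n & 7) == 3 || (n & 7) == 5) {
            s = -s;
         }
      }

      /* swap the arguments, applying the reciprocity law of fact five */
      { unsigned long t = a; a = n; n = t; }
      if ((a & 3) == 3 && (n & 3) == 3) {
         s = -s;
      }

      /* reduce modulo the new (odd) denominator, by fact three */
      a %= n;
   }

   /* if the two inputs were not co-prime the symbol is zero */
   return (n == 1) ? s : 0;
}
\end{verbatim}

For instance, jacobi\_small(5, 7) returns $-1$, in agreement with the worked example earlier in this section.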
5951 | |
5952 \newpage\begin{figure}[!here] | |
5953 \begin{small} | |
5954 \begin{center} | |
5955 \begin{tabular}{l} | |
5956 \hline Algorithm \textbf{mp\_jacobi}. \\ | |
5957 \textbf{Input}. mp\_int $a$ and $p$, $a \ge 0$, $p \ge 3$, $p \equiv 1 \mbox{ (mod }2\mbox{)}$ \\ | |
5958 \textbf{Output}. The Jacobi symbol $c = \left ( {a \over p } \right )$. \\ | |
5959 \hline \\ | |
5960 1. If $a = 0$ then \\ | |
5961 \hspace{3mm}1.1 $c \leftarrow 0$ \\ | |
5962 \hspace{3mm}1.2 Return(\textit{MP\_OKAY}). \\ | |
5963 2. If $a = 1$ then \\ | |
5964 \hspace{3mm}2.1 $c \leftarrow 1$ \\ | |
5965 \hspace{3mm}2.2 Return(\textit{MP\_OKAY}). \\ | |
5966 3. $a' \leftarrow a$ \\ | |
5967 4. $k \leftarrow 0$ \\ | |
5968 5. While $a'.used > 0$ and $a'_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
5969 \hspace{3mm}5.1 $k \leftarrow k + 1$ \\ | |
5970 \hspace{3mm}5.2 $a' \leftarrow \lfloor a' / 2 \rfloor$ \\ | |
5971 6. If $k \equiv 0 \mbox{ (mod }2\mbox{)}$ then \\ | |
5972 \hspace{3mm}6.1 $s \leftarrow 1$ \\ | |
5973 7. else \\ | |
5974 \hspace{3mm}7.1 $r \leftarrow p_0 \mbox{ (mod }8\mbox{)}$ \\ | |
5975 \hspace{3mm}7.2 If $r = 1$ or $r = 7$ then \\ | |
5976 \hspace{6mm}7.2.1 $s \leftarrow 1$ \\ | |
5977 \hspace{3mm}7.3 else \\ | |
5978 \hspace{6mm}7.3.1 $s \leftarrow -1$ \\ | |
5979 8. If $p_0 \equiv a'_0 \equiv 3 \mbox{ (mod }4\mbox{)}$ then \\ | |
5980 \hspace{3mm}8.1 $s \leftarrow -s$ \\ | |
5981 9. If $a' \ne 1$ then \\ | |
5982 \hspace{3mm}9.1 $p' \leftarrow p \mbox{ (mod }a'\mbox{)}$ \\ | |
5983 \hspace{3mm}9.2 $s \leftarrow s \cdot \mbox{mp\_jacobi}(p', a')$ \\ | |
5984 10. $c \leftarrow s$ \\ | |
5985 11. Return(\textit{MP\_OKAY}). \\ | |
5986 \hline | |
5987 \end{tabular} | |
5988 \end{center} | |
5989 \end{small} | |
5990 \caption{Algorithm mp\_jacobi} | |
5991 \end{figure} | |
5992 \textbf{Algorithm mp\_jacobi.} | |
5993 This algorithm computes the Jacobi symbol for an arbitrary positive integer $a$ with respect to an odd integer $p$ greater than or equal to three. The algorithm | |
5994 is based on algorithm 2.149 of HAC \cite[pp. 73]{HAC}. | |
5995 | |
5996 Step numbers one and two handle the trivial cases of $a = 0$ and $a = 1$ respectively. Step five determines the number of factors of two in the | |
5997 input $a$. If $k$ is even then the term $\left ( { 2 \over p } \right )^k$ must always evaluate to one. If $k$ is odd then the term evaluates to one | |
5998 if $p_0$ is congruent to one or seven modulo eight, otherwise it evaluates to $-1$. After the $\left ( { 2 \over p } \right )^k$ term is handled | |
5999 the $(-1)^{(p-1)(a'-1)/4}$ term is computed and multiplied against the current product $s$. The latter term evaluates to negative one if both $p$ and $a'$ | |
6000 are congruent to three modulo four, otherwise it evaluates to one. | |
6001 | |
6002 By step nine if $a'$ does not equal one a recursion is required. Step 9.1 computes $p' \equiv p \mbox{ (mod }a'\mbox{)}$ and will recurse to compute | |
6003 $\left ( {p' \over a'} \right )$ which is multiplied against the current Jacobi product. | |
6004 | |
6005 EXAM,bn_mp_jacobi.c | |
6006 | |
6007 As a matter of practicality the variable $a'$ as per the pseudo-code is represented by the variable $a1$ since the $'$ symbol is not a valid character in a C | |
6008 variable name. | |
6009 | |
6010 The two simple cases of $a = 0$ and $a = 1$ are handled at the very beginning to simplify the algorithm. If the input is non-trivial the algorithm | |
6011 has to proceed to compute the Jacobi symbol. The variable $s$ is used to hold the current Jacobi product. Note that $s$ is simply a C ``int'' data type since | |
6012 the only values it may take are $-1$, $0$ and $1$. | |
6013 | |
6014 After a local copy of $a$ is made all of the factors of two are divided out and the total is stored in $k$. Technically only the least significant | |
6015 bit of $k$ is required, however, performing a full addition makes the algorithm simpler to follow. In practice an exclusive-or and an addition have the same | |
6016 processor requirements and neither is faster than the other. | |
6017 | |
6018 Lines @59, if@ through @70, }@ determine the value of $\left ( { 2 \over p } \right )^k$. If the least significant bit of $k$ is zero then | |
6019 $k$ is even and the value is one. Otherwise, the value of $s$ depends on which residue class $p$ belongs to modulo eight. The value of | |
6020 $(-1)^{(p-1)(a'-1)/4}$ is computed and multiplied against $s$ on lines @73, if@ through @75, }@. | |
6021 | |
6022 Finally, if $a1$ does not equal one the algorithm must recurse and compute $\left ( {p' \over a'} \right )$. | |
6023 | |
6025 | |
6026 \section{Modular Inverse} | |
6027 \label{sec:modinv} | |
6028 The modular inverse of a number actually refers to the modular multiplicative inverse. Essentially for any integer $a$ such that $(a, p) = 1$ there | |
6029 exists another integer $b$ such that $ab \equiv 1 \mbox{ (mod }p\mbox{)}$. The integer $b$ is called the multiplicative inverse of $a$ which is | |
6030 denoted as $b = a^{-1}$. Technically speaking modular inversion is a well defined operation for any finite ring or field not just for rings and | |
6031 fields of integers. However, the latter will be the matter of discussion. | |
6032 | |
6033 The simplest approach is to compute the algebraic inverse of the input. That is, to compute $b \equiv a^{\Phi(p) - 1} \mbox{ (mod }p\mbox{)}$. If $\Phi(p)$ is the | |
6034 order of the multiplicative subgroup modulo $p$ then $b$ must be the multiplicative inverse of $a$. The proof is trivial. | |
6035 | |
6036 \begin{equation} | |
6037 ab \equiv a \left (a^{\Phi(p) - 1} \right ) \equiv a^{\Phi(p)} \equiv a^0 \equiv 1 \mbox{ (mod }p\mbox{)} | |
6038 \end{equation} | |
6039 | |
6040 However, as simple as this approach may be it has two serious flaws. It requires that the value of $\Phi(p)$ be known, which if $p$ is composite | |
6041 requires knowledge of all of the prime factors of $p$. This approach is also very slow as the size of $p$ grows. | |
6042 | |
6043 A simpler approach is based on the observation that solving for the multiplicative inverse is equivalent to solving the linear | |
6044 Diophantine\footnote{See LeVeque \cite[pp. 40-43]{LeVeque} for more information.} equation. | |
6045 | |
6046 \begin{equation} | |
6047 ab + pq = 1 | |
6048 \end{equation} | |
6049 | |
6050 Where $a$, $b$, $p$ and $q$ are all integers. If such a pair of integers $ \left < b, q \right >$ exists then $b$ is the multiplicative inverse of | |
6051 $a$ modulo $p$. The extended Euclidean algorithm (Knuth \cite[pp. 342]{TAOCPV2}) can be used to solve such equations provided $(a, p) = 1$. | |
6052 However, instead of using that algorithm directly a variant known as the binary Extended Euclidean algorithm will be used in its place. The | |
6053 binary approach is very similar to the binary greatest common divisor algorithm except it will produce a full solution to the Diophantine | |
6054 equation. | |
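To make the connection with the Diophantine equation concrete, the following is a minimal sketch of the classical division based extended Euclidean algorithm on machine words. Note that this is the schoolbook variant, not the binary variant used by mp\_invmod below, and the function names are purely illustrative.

\begin{verbatim}
/* Extended Euclidean algorithm on machine words (illustrative sketch).
   Computes g = gcd(a, p) along with b and q such that a*b + p*q == g. */
long xgcd(long a, long p, long *b, long *q)
{
   long b0 = 1, q0 = 0;   /* coefficients expressing the current a */
   long b1 = 0, q1 = 1;   /* coefficients expressing the current p */

   while (p != 0) {
      long quot = a / p, t;

      t = a - quot * p;   a  = p;  p  = t;
      t = b0 - quot * b1; b0 = b1; b1 = t;
      t = q0 - quot * q1; q0 = q1; q1 = t;
   }

   *b = b0;
   *q = q0;
   return a;              /* the greatest common divisor */
}

/* modular inverse of a modulo p via the identity a*b + p*q = 1 */
long invmod_small(long a, long p)
{
   long b, q;

   if (xgcd(a, p, &b, &q) != 1) {
      return 0;           /* no inverse exists */
   }
   return ((b % p) + p) % p;
}
\end{verbatim}

For example, invmod\_small(3, 7) returns $5$ since $3 \cdot 5 = 15 \equiv 1 \mbox{ (mod }7\mbox{)}$.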
6055 | |
6056 \subsection{General Case} | |
6057 \newpage\begin{figure}[!here] | |
6058 \begin{small} | |
6059 \begin{center} | |
6060 \begin{tabular}{l} | |
6061 \hline Algorithm \textbf{mp\_invmod}. \\ | |
6062 \textbf{Input}. mp\_int $a$ and $b$, $(a, b) = 1$, $b \ge 2$, $0 < a < b$. \\ | |
6063 \textbf{Output}. The modular inverse $c \equiv a^{-1} \mbox{ (mod }b\mbox{)}$. \\ | |
6064 \hline \\ | |
6065 1. If $b \le 0$ then return(\textit{MP\_VAL}). \\ | |
6066 2. If $b_0 \equiv 1 \mbox{ (mod }2\mbox{)}$ then use algorithm fast\_mp\_invmod. \\ | |
6067 3. $x \leftarrow \vert a \vert, y \leftarrow b$ \\ | |
6068 4. If $x_0 \equiv y_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ then return(\textit{MP\_VAL}). \\ | |
6069 5. $u \leftarrow x, v \leftarrow y, B \leftarrow 0, C \leftarrow 0, A \leftarrow 1, D \leftarrow 1$ \\ | |
6070 6. While $u.used > 0$ and $u_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
6071 \hspace{3mm}6.1 $u \leftarrow \lfloor u / 2 \rfloor$ \\ | |
6072 \hspace{3mm}6.2 If ($A.used > 0$ and $A_0 \equiv 1 \mbox{ (mod }2\mbox{)}$) or ($B.used > 0$ and $B_0 \equiv 1 \mbox{ (mod }2\mbox{)}$) then \\ | |
6073 \hspace{6mm}6.2.1 $A \leftarrow A + y$ \\ | |
6074 \hspace{6mm}6.2.2 $B \leftarrow B - x$ \\ | |
6075 \hspace{3mm}6.3 $A \leftarrow \lfloor A / 2 \rfloor$ \\ | |
6076 \hspace{3mm}6.4 $B \leftarrow \lfloor B / 2 \rfloor$ \\ | |
6077 7. While $v.used > 0$ and $v_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
6078 \hspace{3mm}7.1 $v \leftarrow \lfloor v / 2 \rfloor$ \\ | |
6079 \hspace{3mm}7.2 If ($C.used > 0$ and $C_0 \equiv 1 \mbox{ (mod }2\mbox{)}$) or ($D.used > 0$ and $D_0 \equiv 1 \mbox{ (mod }2\mbox{)}$) then \\ | |
6080 \hspace{6mm}7.2.1 $C \leftarrow C + y$ \\ | |
6081 \hspace{6mm}7.2.2 $D \leftarrow D - x$ \\ | |
6082 \hspace{3mm}7.3 $C \leftarrow \lfloor C / 2 \rfloor$ \\ | |
6083 \hspace{3mm}7.4 $D \leftarrow \lfloor D / 2 \rfloor$ \\ | |
6084 8. If $u \ge v$ then \\ | |
6085 \hspace{3mm}8.1 $u \leftarrow u - v$ \\ | |
6086 \hspace{3mm}8.2 $A \leftarrow A - C$ \\ | |
6087 \hspace{3mm}8.3 $B \leftarrow B - D$ \\ | |
6088 9. else \\ | |
6089 \hspace{3mm}9.1 $v \leftarrow v - u$ \\ | |
6090 \hspace{3mm}9.2 $C \leftarrow C - A$ \\ | |
6091 \hspace{3mm}9.3 $D \leftarrow D - B$ \\ | |
6092 10. If $u \ne 0$ goto step 6. \\ | |
6093 11. If $v \ne 1$ return(\textit{MP\_VAL}). \\ | |
6094 12. While $C \le 0$ do \\ | |
6095 \hspace{3mm}12.1 $C \leftarrow C + b$ \\ | |
6096 13. While $C \ge b$ do \\ | |
6097 \hspace{3mm}13.1 $C \leftarrow C - b$ \\ | |
6098 14. $c \leftarrow C$ \\ | |
6099 15. Return(\textit{MP\_OKAY}). \\ | |
6100 \hline | |
6101 \end{tabular} | |
6102 \end{center} | |
6103 \end{small} | |
\caption{Algorithm mp\_invmod}
6104 \end{figure} | |
6105 \textbf{Algorithm mp\_invmod.} | |
6106 This algorithm computes the modular multiplicative inverse of an integer $a$ modulo an integer $b$. This algorithm is a variation of the | |
6107 extended binary Euclidean algorithm from HAC \cite[pp. 608]{HAC}. It has been modified to only compute the modular inverse and not a complete | |
6108 Diophantine solution. | |
6109 | |
6110 If $b \le 0$ then the modulus is invalid and MP\_VAL is returned. Similarly if both $a$ and $b$ are even then there cannot be a multiplicative | |
6111 inverse for $a$ and the error is reported. | |
6112 | |
6113 The astute reader will observe that steps seven through nine are very similar to the binary greatest common divisor algorithm mp\_gcd. In this case | |
6114 the additional variables of the Diophantine equation are solved for as well. The algorithm terminates when $u = 0$ in which case the solution is | |
6115 | |
6116 \begin{equation} | |
6117 Ca + Db = v | |
6118 \end{equation} | |
6119 | |
6120 If $v$, the greatest common divisor of $a$ and $b$, is not equal to one then the algorithm will report an error as no inverse exists. Otherwise, $C$ | |
6121 is the modular inverse of $a$. The actual value of $C$ is congruent to, but not necessarily equal to, the ideal modular inverse which should lie | |
6122 within $1 \le a^{-1} < b$. Step numbers twelve and thirteen adjust the inverse until it is in range. If the original input $a$ is within $0 < a < b$ | |
6123 then only a couple of additions or subtractions will be required to adjust the inverse. | |
6124 | |
6125 EXAM,bn_mp_invmod.c | |
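A short usage sketch follows, again assuming the usual tommath.h interfaces. It inverts $3$ modulo $7$ and demonstrates the error return for a pair with a common factor, for which no inverse exists.

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <tommath.h>

int main(void)
{
   mp_int a, b, c;
   char   buf[128];

   if (mp_init_multi(&a, &b, &c, NULL) != MP_OKAY) {
      return EXIT_FAILURE;
   }

   /* compute 3^-1 (mod 7), which is 5 since 3*5 = 15 = 2*7 + 1 */
   mp_set_int(&a, 3);
   mp_set_int(&b, 7);
   if (mp_invmod(&a, &b, &c) == MP_OKAY) {
      mp_toradix(&c, buf, 10);
      printf("inverse = %s\n", buf);
   }

   /* (4, 8) != 1 so this call must fail */
   mp_set_int(&a, 4);
   mp_set_int(&b, 8);
   if (mp_invmod(&a, &b, &c) != MP_OKAY) {
      printf("no inverse exists\n");
   }

   mp_clear_multi(&a, &b, &c, NULL);
   return EXIT_SUCCESS;
}
\end{verbatim}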
6126 | |
6127 \subsubsection{Odd Moduli} | |
6128 | |
6129 When the modulus $b$ is odd the variables $A$ and $C$ are fixed and are not required to compute the inverse. In particular, by attempting to solve | |
6130 the Diophantine equation $Cb + Da = 1$ only $B$ and $D$ need to be maintained to find the inverse of $a$. | |
6131 | |
6132 The algorithm fast\_mp\_invmod is a direct adaptation of algorithm mp\_invmod with all steps involving either $A$ or $C$ removed. This | |
6133 optimization will halve the time required to compute the modular inverse. | |
6134 | |
6135 \section{Primality Tests} | |
6136 | |
6137 An integer $a$ greater than one is said to be prime if it is not divisible by any integer other than one and itself. For example, $a = 7$ is prime | |
6138 since the integers $2 \ldots 6$ do not evenly divide $a$. By contrast, $a = 6$ is not prime since $a = 6 = 2 \cdot 3$. | |
6139 | |
6140 Prime numbers arise in cryptography considerably as they allow finite fields to be formed. The ability to quickly determine whether an integer is prime or | |
6141 not has been an active subject in cryptography and number theory for a considerable time. The algorithms that will be presented are all | |
6142 probabilistic algorithms in that when they report an integer is composite it must be composite. However, when the algorithms report an integer is | |
6143 prime the algorithm may be incorrect. | |
6144 | |
6145 As will be discussed it is possible to limit the probability of error so well that for practical purposes the probability of error might as | |
6146 well be zero. For the purposes of these discussions let $n$ represent the candidate integer whose primality is in question. | |
6147 | |
6148 \subsection{Trial Division} | |
6149 | |
6150 Trial division means to attempt to evenly divide a candidate integer by small prime integers. If the candidate can be evenly divided it obviously | |
6151 cannot be prime. By dividing by all primes $1 < p \le \sqrt{n}$ this test can actually prove whether an integer is prime. However, such a test | |
6152 would require a prohibitive amount of time as $n$ grows. | |
6153 | |
6154 Instead of dividing by every prime, a smaller, more manageable set of primes may be used. By performing trial division with only a subset | |
6155 of the primes less than $\sqrt{n} + 1$ the algorithm cannot prove if a candidate is prime. However, often it can prove a candidate is not prime. | |
6156 | |
6157 The benefit of this test is that trial division by small values is fairly efficient, especially compared to the other algorithms that will be | |
6158 discussed shortly. The probability that this approach correctly identifies a composite candidate when tested with all primes up to $q$ is given by | |
6159 $1 - {1.12 \over \ln(q)}$. The graph (\ref{pic:primality}, to be added later) demonstrates the probability of success for the range | |
6160 $3 \le q \le 100$. | |
6161 | |
6162 At approximately $q = 30$ the gain of performing further tests diminishes fairly quickly. At $q = 90$ further testing is generally not going to | |
6163 be of any practical use. In the case of LibTomMath the default limit $q = 256$ was chosen since it is not too high and will eliminate | |
6164 approximately $80\%$ of all candidate integers. The constant \textbf{PRIME\_SIZE} is equal to the number of primes in the test base. The | |
6165 array \_\_prime\_tab is an array of the first \textbf{PRIME\_SIZE} prime numbers. | |
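Before the multiple precision routine is presented, the following machine word sketch illustrates the idea with a short hard coded table; the table size and names are purely illustrative.

\begin{verbatim}
#include <stddef.h>

/* Trial division against a small prime table (illustrative sketch).
   Returns 1 if n is divisible by a table entry, 0 otherwise.  As with
   the algorithm below, a candidate equal to one of the table primes is
   also reported as divisible. */
static const unsigned long small_primes[] = {
   2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37
};

int trial_divide(unsigned long n)
{
   size_t ix;

   for (ix = 0; ix < sizeof(small_primes) / sizeof(small_primes[0]); ix++) {
      if (n % small_primes[ix] == 0) {
         return 1;
      }
   }
   return 0;
}
\end{verbatim}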
6166 | |
6167 \begin{figure}[!here] | |
6168 \begin{small} | |
6169 \begin{center} | |
6170 \begin{tabular}{l} | |
6171 \hline Algorithm \textbf{mp\_prime\_is\_divisible}. \\ | |
6172 \textbf{Input}. mp\_int $a$ \\ | |
6173 \textbf{Output}. $c = 1$ if $a$ is divisible by a small prime, otherwise $c = 0$. \\ | |
6174 \hline \\ | |
6175 1. for $ix$ from $0$ to $PRIME\_SIZE - 1$ do \\ | |
6176 \hspace{3mm}1.1 $d \leftarrow a \mbox{ (mod }\_\_prime\_tab_{ix}\mbox{)}$ \\ | |
6177 \hspace{3mm}1.2 If $d = 0$ then \\ | |
6178 \hspace{6mm}1.2.1 $c \leftarrow 1$ \\ | |
6179 \hspace{6mm}1.2.2 Return(\textit{MP\_OKAY}). \\ | |
6180 2. $c \leftarrow 0$ \\ | |
6181 3. Return(\textit{MP\_OKAY}). \\ | |
6182 \hline | |
6183 \end{tabular} | |
6184 \end{center} | |
6185 \end{small} | |
6186 \caption{Algorithm mp\_prime\_is\_divisible} | |
6187 \end{figure} | |
6188 \textbf{Algorithm mp\_prime\_is\_divisible.} | |
6189 This algorithm attempts to determine if a candidate integer $a$ is composite by performing trial divisions. | |
6190 | |
6191 EXAM,bn_mp_prime_is_divisible.c | |
6192 | |
6193 The algorithm defaults to a return of $0$ in case an error occurs. The values in the prime table are all specified to be in the range of a | |
6194 mp\_digit. The table \_\_prime\_tab is defined in the following file. | |
6195 | |
6196 EXAM,bn_prime_tab.c | |
6197 | |
6198 Note that there are two possible tables. When an mp\_digit is 7 bits long only the primes up to $127$ may be included, otherwise the primes | |
6199 up to $1619$ are used. Note that the value of \textbf{PRIME\_SIZE} is a constant dependent on the size of a mp\_digit. | |
6200 | |
6201 \subsection{The Fermat Test} | |
6202 The Fermat test is probably one of the oldest tests to have a non-trivial probability of success. It is based on the fact that if $n$ is in | |
6203 fact prime then $a^{n} \equiv a \mbox{ (mod }n\mbox{)}$ for all $0 < a < n$. The reason being that if $n$ is prime then the order of | |
6204 the multiplicative subgroup is $n - 1$. Any base $a$ must have an order which divides $n - 1$ and as such $a^n$ is equivalent to | |
6205 $a^1 = a$. | |
6206 | |
6207 If $n$ is composite then any given base $a$ does not have to have an order which divides $n - 1$, in which case | |
6208 it is possible that $a^n \nequiv a \mbox{ (mod }n\mbox{)}$. However, this test is not absolute as it is possible that the order | |
6209 of a base will divide $n - 1$ even though $n$ is composite, in which case $n$ would be reported as prime. Such a base yields what is known as a Fermat pseudo-prime. Several | |
6210 integers known as Carmichael numbers are pseudo-primes to all valid bases. Fortunately such numbers are extremely rare as $n$ grows | |
6211 in size. | |
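The test is simple to sketch on machine words using a small square and multiply modular exponentiation helper; the multiple precision routine below performs the same computation with a single modular exponentiation. The sketch is only valid for moduli small enough that the intermediate products do not overflow.

\begin{verbatim}
/* Square-and-multiply modular exponentiation on machine words.
   Only valid while (n - 1) * (n - 1) fits in an unsigned long. */
unsigned long modpow(unsigned long b, unsigned long e, unsigned long n)
{
   unsigned long y = 1;

   b %= n;
   while (e > 0) {
      if (e & 1) {
         y = (y * b) % n;
      }
      b = (b * b) % n;
      e >>= 1;
   }
   return y;
}

/* Fermat test (illustrative sketch): 1 if b^n == b (mod n), else 0. */
int fermat_test(unsigned long n, unsigned long b)
{
   return modpow(b, n, n) == b % n;
}
\end{verbatim}

For instance, fermat\_test(341, 2) returns $1$ even though $341 = 11 \cdot 31$, which is exactly the pseudo-prime behaviour described above.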
6212 | |
6213 \begin{figure}[!here] | |
6214 \begin{small} | |
6215 \begin{center} | |
6216 \begin{tabular}{l} | |
6217 \hline Algorithm \textbf{mp\_prime\_fermat}. \\ | |
6218 \textbf{Input}. mp\_int $a$ and $b$, $a \ge 2$, $0 < b < a$. \\ | |
6219 \textbf{Output}. $c = 1$ if $b^a \equiv b \mbox{ (mod }a\mbox{)}$, otherwise $c = 0$. \\ | |
6220 \hline \\ | |
6221 1. $t \leftarrow b^a \mbox{ (mod }a\mbox{)}$ \\ | |
6222 2. If $t = b$ then \\ | |
6223 \hspace{3mm}2.1 $c = 1$ \\ | |
6224 3. else \\ | |
6225 \hspace{3mm}3.1 $c = 0$ \\ | |
6226 4. Return(\textit{MP\_OKAY}). \\ | |
6227 \hline | |
6228 \end{tabular} | |
6229 \end{center} | |
6230 \end{small} | |
6231 \caption{Algorithm mp\_prime\_fermat} | |
6232 \end{figure} | |
6233 \textbf{Algorithm mp\_prime\_fermat.} | |
6234 This algorithm determines whether an mp\_int $a$ is a probable prime to the base $b$, that is whether $b^a \equiv b \mbox{ (mod }a\mbox{)}$. It uses a single modular exponentiation to | |
6235 determine the result. | |
6236 | |
6237 EXAM,bn_mp_prime_fermat.c | |
6238 | |
6239 \subsection{The Miller-Rabin Test} | |
6240 The Miller-Rabin (citation) test is another primality test which has tighter error bounds than the Fermat test, specifically with sequentially chosen | |
6241 candidate integers. The algorithm is based on the observation that if $n - 1 = 2^kr$ with $r$ odd and $b^r \nequiv \pm 1 \mbox{ (mod }n\mbox{)}$, then if $n$ is prime after up to $k - 1$ squarings the | |
6242 value must become congruent to $-1$. The squarings are stopped as soon as $-1$ is observed. If the value of $1$ is observed first it means that | |
6243 some value not congruent to $\pm 1$ when squared equals one, which cannot occur if $n$ is prime. | |
6244 | |
6245 \begin{figure}[!here] | |
6246 \begin{small} | |
6247 \begin{center} | |
6248 \begin{tabular}{l} | |
6249 \hline Algorithm \textbf{mp\_prime\_miller\_rabin}. \\ | |
6250 \textbf{Input}. mp\_int $a$ and $b$, $a \ge 2$, $0 < b < a$. \\ | |
6251 \textbf{Output}. $c = 1$ if $a$ is a Miller-Rabin prime to the base $b$, otherwise $c = 0$. \\ | |
6252 \hline | |
6253 1. $a' \leftarrow a - 1$ \\ | |
6254 2. $r \leftarrow a'$ \\ | |
6255 3. $c \leftarrow 0, s \leftarrow 0$ \\ | |
6256 4. While $r.used > 0$ and $r_0 \equiv 0 \mbox{ (mod }2\mbox{)}$ \\ | |
6257 \hspace{3mm}4.1 $s \leftarrow s + 1$ \\ | |
6258 \hspace{3mm}4.2 $r \leftarrow \lfloor r / 2 \rfloor$ \\ | |
6259 5. $y \leftarrow b^r \mbox{ (mod }a\mbox{)}$ \\ | |
6260 6. If $y \nequiv \pm 1$ then \\ | |
6261 \hspace{3mm}6.1 $j \leftarrow 1$ \\ | |
6262 \hspace{3mm}6.2 While $j \le (s - 1)$ and $y \nequiv a'$ \\ | |
6263 \hspace{6mm}6.2.1 $y \leftarrow y^2 \mbox{ (mod }a\mbox{)}$ \\ | |
6264 \hspace{6mm}6.2.2 If $y = 1$ then goto step 8. \\ | |
6265 \hspace{6mm}6.2.3 $j \leftarrow j + 1$ \\ | |
6266 \hspace{3mm}6.3 If $y \nequiv a'$ goto step 8. \\ | |
6267 7. $c \leftarrow 1$\\ | |
6268 8. Return(\textit{MP\_OKAY}). \\ | |
6269 \hline | |
6270 \end{tabular} | |
6271 \end{center} | |
6272 \end{small} | |
6273 \caption{Algorithm mp\_prime\_miller\_rabin} | |
6274 \end{figure} | |
6275 \textbf{Algorithm mp\_prime\_miller\_rabin.} | |
6276 This algorithm performs one trial round of the Miller-Rabin algorithm to the base $b$. It will set $c = 1$ if the algorithm cannot determine | |
6277 if $a$ is composite or $c = 0$ if $a$ is provably composite. The values of $s$ and $r$ are computed such that $a' = a - 1 = 2^sr$. | |
6278 | |
6279 If the value $y \equiv b^r$ is congruent to $\pm 1$ then the algorithm cannot prove if $a$ is composite or not. Otherwise, the algorithm will | |
6280 square $y$ up to $s - 1$ times stopping only when $y \equiv -1$. If $y^2 \equiv 1$ and $y \nequiv \pm 1$ then the algorithm can report that $a$ | |
6281 is provably composite. If the algorithm performs $s - 1$ squarings and $y \nequiv -1$ then $a$ is provably composite. If $a$ is not provably | |
6282 composite then it is \textit{probably} prime. | |
6283 | |
6284 EXAM,bn_mp_prime_miller_rabin.c | |
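As a usage sketch, again assuming the usual tommath.h interfaces, the following fragment runs the round above against a candidate with a handful of small prime bases. The library also provides mp\_prime\_is\_prime which combines trial division with a series of Miller-Rabin rounds in a single call.

\begin{verbatim}
#include <stdio.h>
#include <stdlib.h>
#include <tommath.h>

int main(void)
{
   /* a few small prime bases for the individual rounds */
   static const int bases[] = { 2, 3, 5, 7, 11 };
   mp_int n, b;
   int    ix, result = 0;

   if (mp_init_multi(&n, &b, NULL) != MP_OKAY) {
      return EXIT_FAILURE;
   }

   /* candidate to test; 2147483647 = 2^31 - 1 is a known prime */
   mp_set_int(&n, 2147483647UL);

   for (ix = 0; ix < (int)(sizeof(bases) / sizeof(bases[0])); ix++) {
      mp_set_int(&b, bases[ix]);
      if (mp_prime_miller_rabin(&n, &b, &result) != MP_OKAY) {
         result = 0;
         break;
      }
      if (result == 0) {
         printf("composite to base %d\n", bases[ix]);
         break;
      }
   }
   if (result == 1) {
      printf("probably prime\n");
   }

   mp_clear_multi(&n, &b, NULL);
   return EXIT_SUCCESS;
}
\end{verbatim}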
6285 | |
6286 | |
6287 | |
6288 | |
6289 \backmatter | |
6290 \appendix | |
6291 \begin{thebibliography}{ABCDEF} | |
6292 \bibitem[1]{TAOCPV2} | |
6293 Donald Knuth, \textit{The Art of Computer Programming}, Third Edition, Volume Two, Seminumerical Algorithms, Addison-Wesley, 1998 | |
6294 | |
6295 \bibitem[2]{HAC} | |
6296 A. Menezes, P. van Oorschot, S. Vanstone, \textit{Handbook of Applied Cryptography}, CRC Press, 1996 | |
6297 | |
6298 \bibitem[3]{ROSE} | |
6299 Michael Rosing, \textit{Implementing Elliptic Curve Cryptography}, Manning Publications, 1999 | |
6300 | |
6301 \bibitem[4]{COMBA} | |
6302 Paul G. Comba, \textit{Exponentiation Cryptosystems on the IBM PC}. IBM Systems Journal 29(4): 526-538 (1990) | |
6303 | |
6304 \bibitem[5]{KARA} | |
6305 A. Karatsuba, Doklady Akad. Nauk SSSR 145 (1962), pp. 293-294 | |
6306 | |
6307 \bibitem[6]{KARAP} | |
6308 Andre Weimerskirch and Christof Paar, \textit{Generalizations of the Karatsuba Algorithm for Polynomial Multiplication}, Submitted to Design, Codes and Cryptography, March 2002 | |
6309 | |
6310 \bibitem[7]{BARRETT} | |
6311 Paul Barrett, \textit{Implementing the Rivest Shamir and Adleman Public Key Encryption Algorithm on a Standard Digital Signal Processor}, Advances in Cryptology, Crypto '86, Springer-Verlag. | |
6312 | |
6313 \bibitem[8]{MONT} | |
6314 P. L. Montgomery, \textit{Modular multiplication without trial division}, Mathematics of Computation, 44(170):519-521, April 1985. | |
6315 | |
6316 \bibitem[9]{DRMET} | |
6317 Chae Hoon Lim and Pil Joong Lee, \textit{Generating Efficient Primes for Discrete Log Cryptosystems}, POSTECH Information Research Laboratories | |
6318 | |
6319 \bibitem[10]{MMB} | |
6320 J. Daemen and R. Govaerts and J. Vandewalle, \textit{Block ciphers based on Modular Arithmetic}, State and {P}rogress in the {R}esearch of {C}ryptography, 1993, pp. 80-89 | |
6321 | |
6322 \bibitem[11]{RSAREF} | |
6323 R.L. Rivest, A. Shamir, L. Adleman, \textit{A Method for Obtaining Digital Signatures and Public-Key Cryptosystems} | |
6324 | |
6325 \bibitem[12]{DHREF} | |
6326 Whitfield Diffie, Martin E. Hellman, \textit{New Directions in Cryptography}, IEEE Transactions on Information Theory, 1976 | |
6327 | |
6328 \bibitem[13]{IEEE} | |
6329 IEEE Standard for Binary Floating-Point Arithmetic (ANSI/IEEE Std 754-1985) | |
6330 | |
6331 \bibitem[14]{GMP} | |
6332 GNU Multiple Precision (GMP), \url{http://www.swox.com/gmp/} | |
6333 | |
6334 \bibitem[15]{MPI} | |
6335 Multiple Precision Integer Library (MPI), Michael Fromberger, \url{http://thayer.dartmouth.edu/~sting/mpi/} | |
6336 | |
6337 \bibitem[16]{OPENSSL} | |
6338 OpenSSL Cryptographic Toolkit, \url{http://openssl.org} | |
6339 | |
6340 \bibitem[17]{LIP} | |
6341 Large Integer Package, \url{http://home.hetnet.nl/~ecstr/LIP.zip} | |
6342 | |
6343 \bibitem[18]{ISOC} | |
6344 JTC1/SC22/WG14, ISO/IEC 9899:1999, ``A draft rationale for the C99 standard.'' | |
6345 | |
6346 \bibitem[19]{JAVA} | |
6347 The Sun Java Website, \url{http://java.sun.com/} | |
6348 | |
6349 \end{thebibliography} | |
6350 | |
6351 \input{tommath.ind} | |
6352 | |
6353 \end{document} |