\documentclass[b5paper]{book}
\usepackage{hyperref}
\usepackage{makeidx}
\usepackage{amssymb}
\usepackage{color}
\usepackage{alltt}
\usepackage{graphicx}
\usepackage{layout}
\def\union{\cup}
\def\intersect{\cap}
\def\getsrandom{\stackrel{\rm R}{\gets}}
\def\cross{\times}
\def\cat{\hspace{0.5em} \| \hspace{0.5em}}
\def\catn{$\|$}
\def\divides{\hspace{0.3em} | \hspace{0.3em}}
\def\nequiv{\not\equiv}
\def\approx{\raisebox{0.2ex}{\mbox{\small $\sim$}}}
\def\lcm{{\rm lcm}}
\def\gcd{{\rm gcd}}
\def\log{{\rm log}}
\def\ord{{\rm ord}}
\def\abs{{\mathit abs}}
\def\rep{{\mathit rep}}
\def\mod{{\mathit\ mod\ }}
\renewcommand{\pmod}[1]{\ ({\rm mod\ }{#1})}
\newcommand{\floor}[1]{\left\lfloor{#1}\right\rfloor}
\newcommand{\ceil}[1]{\left\lceil{#1}\right\rceil}
\def\Or{{\rm\ or\ }}
\def\And{{\rm\ and\ }}
\def\iff{\hspace{1em}\Longleftrightarrow\hspace{1em}}
\def\implies{\Rightarrow}
\def\undefined{{\rm ``undefined''}}
\def\Proof{\vspace{1ex}\noindent {\bf Proof:}\hspace{1em}}
\let\oldphi\phi
\def\phi{\varphi}
\def\Pr{{\rm Pr}}
\newcommand{\str}[1]{{\mathbf{#1}}}
\def\F{{\mathbb F}}
\def\N{{\mathbb N}}
\def\Z{{\mathbb Z}}
\def\R{{\mathbb R}}
\def\C{{\mathbb C}}
\def\Q{{\mathbb Q}}
\definecolor{DGray}{gray}{0.5}
\newcommand{\emailaddr}[1]{\mbox{$<${#1}$>$}}
\def\twiddle{\raisebox{0.3ex}{\mbox{\tiny $\sim$}}}
\def\gap{\vspace{0.5ex}}
\makeindex

\begin{document}
\frontmatter
\pagestyle{empty}
\title{Implementing Multiple Precision Arithmetic \\ ~ \\ Draft Edition}
\author{\mbox{
%\begin{small}
\begin{tabular}{c}
Tom St Denis \\
Algonquin College \\
\\
Mads Rasmussen \\
Open Communications Security \\
\\
Greg Rose \\
QUALCOMM Australia \\
\end{tabular}
%\end{small}
}
}
\maketitle

This text has been placed in the public domain. This text corresponds to the v0.30 release of the
LibTomMath project.

\begin{alltt}
Tom St Denis
111 Banning Rd
Ottawa, Ontario
K2L 1C3
Canada

Phone: 1-613-836-3160
Email: [email protected]
\end{alltt}

This text is formatted to the international B5 paper size of 176mm wide by 250mm tall using the \LaTeX{}
{\em book} macro package and the Perl {\em booker} package.

\tableofcontents
\listoffigures
\chapter*{Prefaces to the Draft Edition}

I started this text in April 2003 to complement my LibTomMath library, that is, to explain how to implement the functions
contained in LibTomMath. The goal is to have a textbook that any Computer Science student can use when implementing their
own multiple precision arithmetic. The plan I wanted to follow was to flesh out all the ideas and concepts I had floating
around in my head and then refine the text afterwards a little bit at a time. As chance would have it, I ended up with my
summer off from Algonquin College and was given four solid months to work on the text.

Choosing not to waste any time, I dove right into the project even before my spring semester was finished. I wrote a bit
off and on at first. The moment my exams were finished I jumped into long 12 to 16 hour days. The result after only
a couple of months was a ten chapter, three hundred page draft that I quickly had distributed to anyone who wanted
to read it. I had Jean-Luc Cooke print copies for me and I brought them to Crypto'03 in Santa Barbara. So far I have
managed to grab a certain level of attention; having people from around the world ask me for copies of the text was
certainly rewarding.

Now we are past December 2003. By this time I had pictured that I would have at least finished my second draft of the text.
Currently I am far from this goal. I've done partial rewrites of chapters one, two and three but they are not even
finished yet. I haven't given up on the project, I've only had some setbacks. First O'Reilly declined to publish the text,
then Addison-Wesley, and Greg has tried another publisher whose name I don't know. However, at this point I want to focus
my energy on finishing the book, not securing a contract.

So why am I writing this text? It seems like a lot of work, right? Most certainly it is a lot of work writing a textbook.
Even the simplest introductory material has to be lined with references and figures. A lot of the text has to be rewritten
from point form to prose form to ensure an easier read. Why am I doing all this work for free then? Simple. My philosophy
is quite simply ``Open Source. Open Academia. Open Minds'', which means that to achieve a goal of open minds, that is,
people willing to accept new ideas and explore the unknown, you have to make available material they can access freely
without hindrance.

I've been writing free software since I was about sixteen but only recently have I hit upon software that people have come
to depend upon. I started LibTomCrypt in December 2001 and now several major companies use it as an integral portion of their
software. Several educational institutions use it as a matter of course and many freelance developers use it as
part of their projects. To further my contributions I started the LibTomMath project in December 2002, aimed at providing
multiple precision arithmetic routines that students could learn from. That is, to write routines that are not only easy
to understand and follow but provide quite impressive performance considering they are all in standard portable ISO C.

The second leg of my philosophy is ``Open Academia'', which is where this textbook comes in. In the end, when all is
said and done, the text will be usable by educational institutions as a reference on multiple precision arithmetic.

At this time I feel I should share a little information about myself. The most common question I was asked at
Crypto'03, perhaps just out of professional courtesy, was which school I either taught at or attended. The unfortunate
truth is that I neither teach at nor attend a school of academic reputation. I'm currently at Algonquin College, which
is what I'd like to call a ``somewhat academic but mostly vocational'' college. In other words, job training.

I'm a 21 year old computer science student, mostly self-taught in the areas I am aware of (which include a half-dozen
computer science fields, a few fields of mathematics and some English). I look forward to teaching someday but I am
still far off from that goal.

Now it would be improper for me not to introduce the rest of the text's co-authors. While they are only contributing
corrections and editorial feedback, their support has been tremendously helpful in presenting the concepts laid out
in the text so far. Greg has always been there for me. He has tracked my LibTom projects since their inception and has even
sent cheques to help pay tuition from time to time. His background has provided a wonderful source to bounce ideas off
of and to improve the quality of my writing. Mads is another fellow who has just ``been there''. I don't even recall what
his interest in the LibTom projects is, but I'm definitely glad he has been around. His ability to catch logical errors
in my written English has saved me on several occasions, to say the least.

What to expect next? Well, this is still a rough draft. I've only had the chance to update a few chapters. However, I've
been getting the feeling that people are starting to use my text and I owe them some updated material. My current tentative
plan is to edit one chapter every two weeks starting January 4th. It seems insane but my lower course load at college
should provide ample time. By Crypto'04 I plan to have a second draft of the text polished and ready to hand out to as many
people as will take it.

\begin{flushright} Tom St Denis \end{flushright}

\newpage
I found the opportunity to work with Tom appealing for several reasons: not only could I broaden my own horizons, but I could
also contribute to educating others facing the problem of having to handle big number mathematical calculations.

This book is Tom's child and he has been caring for and fostering the project ever since the beginning, with a clear mind of
how he wanted the project to turn out. I have helped by proofreading the text and we have had several discussions about
the layout and language used.

I hold a master's degree in cryptography from the University of Southern Denmark and have always been interested in the
practical aspects of cryptography.

Having worked in the security consultancy business for several years in S\~{a}o Paulo, Brazil, I have been in touch with a
great deal of work in which multiple precision mathematics was needed. Understanding the possibilities for speeding up
multiple precision calculations is often very important, since we deal with outdated machine architectures where modular
reductions, for example, become painfully slow.

This text is for people who stop and wonder when first examining algorithms such as RSA and ask
themselves, ``You tell me this is only secure for large numbers, fine; but how do you implement these numbers?''

\begin{flushright}
Mads Rasmussen

S\~{a}o Paulo -- SP

Brazil
\end{flushright}

\newpage
It's all because I broke my leg. That just happened to be at about the same time that Tom asked for someone to review the section of the book about
Karatsuba multiplication. I was laid up, alone and immobile, and thought ``Why not?'' I vaguely knew what Karatsuba multiplication was, but not
really, so I thought I could help, learn, and stop myself from watching daytime cable TV, all at once.

At the time of writing this, I've still not met Tom or Mads in meatspace. I've been following Tom's progress since his first splash on the
sci.crypt Usenet news group. I watched him go from a clueless newbie, to the cryptographic equivalent of a reformed smoker, to a real
contributor to the field, over a period of about two years. I've been impressed with his obvious intelligence, and astounded by his productivity.
Of course, he's young enough to be my own child, so he doesn't have my problems with staying awake.

When I reviewed that single section of the book, in its very earliest form, I was very pleasantly surprised. So I decided to collaborate more fully,
and at least review all of it, and perhaps write some bits too. There's still a long way to go with it, and I have watched a number of close
friends go through the mill of publication, so I think that the way to go is longer than Tom thinks it is. Nevertheless, it's a good effort,
and I'm pleased to be involved with it.

\begin{flushright}
Greg Rose, Sydney, Australia, June 2003.
\end{flushright}

\mainmatter
\pagestyle{headings}
\chapter{Introduction}
\section{Multiple Precision Arithmetic}

\subsection{What is Multiple Precision Arithmetic?}

When we think of longhand arithmetic such as addition or multiplication we rarely consider the fact that we instinctively
raise or lower the precision of the numbers we are dealing with. For example, in decimal we can almost immediately
reason that $7$ times $6$ is $42$. However, $42$ has two digits of precision as opposed to the one digit we started with.
A further multiplication by, say, $3$ results in the larger precision result $126$. In these few examples we have multiple
precisions for the numbers we are working with. Despite the various levels of precision, a single subset\footnote{With the occasional optimization.}
of algorithms can be designed to accommodate them all.

By way of comparison, a fixed or single precision operation would lose precision on various operations. For example, in
the decimal system with fixed precision $6 \cdot 7 = 2$.

Essentially at the heart of computer based multiple precision arithmetic are the same longhand algorithms taught in
schools to manually add, subtract, multiply and divide.

\subsection{The Need for Multiple Precision Arithmetic}
The most prevalent need for multiple precision arithmetic, often referred to as ``bignum'' math, is within the implementation
of public-key cryptography algorithms. Algorithms such as RSA \cite{RSAREF} and Diffie-Hellman \cite{DHREF} require
integers of significant magnitude to resist known cryptanalytic attacks. For example, at the time of this writing a
typical RSA modulus would be greater than $10^{309}$. However, modern programming languages such as ISO C \cite{ISOC} and
Java \cite{JAVA} only provide intrinsic support for integers which are relatively small and single precision.

\begin{figure}[!ht]
\begin{center}
\begin{tabular}{rc}
\hline \textbf{Data Type} & \textbf{Range} \\
\hline char & $-128 \ldots 127$ \\
\hline short & $-32768 \ldots 32767$ \\
\hline long & $-2147483648 \ldots 2147483647$ \\
\hline long long & $-9223372036854775808 \ldots 9223372036854775807$ \\
\hline
\end{tabular}
\end{center}
\caption{Typical Data Types for the C Programming Language}
\label{fig:ISOC}
\end{figure}

The largest data type guaranteed to be provided by the ISO C programming
language\footnote{As per the ISO C standard. However, each compiler vendor is allowed to augment the precision as they
see fit.} can only represent values up to $10^{19}$ as shown in figure \ref{fig:ISOC}. On its own the C language is
insufficient to accommodate the magnitude required for the problem at hand. An RSA modulus of magnitude $10^{19}$ could be
trivially factored\footnote{A Pollard-Rho factoring would take only $2^{16}$ time.} on the average desktop computer,
rendering any protocol based on the algorithm insecure. Multiple precision algorithms solve this very problem by
extending the range of representable integers while using single precision data types.

Most advancements in fast multiple precision arithmetic stem from the need for faster and more efficient cryptographic
primitives. Faster modular reduction and exponentiation algorithms such as Barrett's algorithm, which have appeared in
various cryptographic journals, can render algorithms such as RSA and Diffie-Hellman more efficient. In fact, several
major companies such as RSA Security, Certicom and Entrust have built entire product lines on the implementation and
deployment of efficient algorithms.

However, cryptography is not the only field of study that can benefit from fast multiple precision integer routines.
Another auxiliary use of multiple precision integers is high precision floating point data types.
The basic IEEE \cite{IEEE} standard floating point type is made up of an integer mantissa $q$, an exponent $e$ and a sign bit $s$.
Numbers are given in the form $n = q \cdot b^e \cdot (-1)^s$ where $b = 2$ is the most common base for IEEE. Since IEEE
floating point is meant to be implemented in hardware the precision of the mantissa is often fairly small
(\textit{23, 48 and 64 bits}). The mantissa is merely an integer and a multiple precision integer could be used to create
a mantissa of much larger precision than hardware alone can efficiently support. This approach could be useful where
scientific applications must minimize the total output error over long calculations.

Yet another use for large integers is within arithmetic on polynomials of large characteristic (i.e. $GF(p)[x]$ for large $p$).
In fact the library discussed within this text has already been used to form a polynomial basis library\footnote{See \url{http://poly.libtomcrypt.org} for more details.}.

\subsection{Benefits of Multiple Precision Arithmetic}
\index{precision}
The benefit of multiple precision representations over single or fixed precision representations is that
no precision is lost while representing the result of an operation which requires excess precision. For example,
the product of two $n$-bit integers requires at least $2n$ bits of precision to be represented faithfully. A multiple
precision algorithm would augment the precision of the destination to accommodate the result while a single precision system
would truncate excess bits to maintain a fixed level of precision.

It is possible to implement algorithms which require large integers with fixed precision algorithms. For example, elliptic
curve cryptography (\textit{ECC}) is often implemented on smartcards by fixing the precision of the integers to the maximum
size the system will ever need. Such an approach can lead to vastly simpler algorithms which can accommodate the
integers required even if the host platform cannot natively accommodate them\footnote{For example, the average smartcard
processor has an 8 bit accumulator.}. However, as efficient as such an approach may be, the resulting source code is not
normally very flexible. It cannot, at runtime, accommodate inputs of higher magnitude than the designer anticipated.

Multiple precision algorithms have the most overhead of any style of arithmetic. For the most part the
overhead can be kept to a minimum with careful planning, but overall, multiple precision arithmetic is not well suited for most
memory starved platforms. However, multiple precision algorithms do offer the most flexibility in terms of the magnitude of the
inputs. That is, the same algorithms based on multiple precision integers can accommodate any reasonable size of input
without the designer's explicit forethought. This leads to a lower cost of ownership for the code as it only has to
be written and tested once.

\section{Purpose of This Text}
The purpose of this text is to instruct the reader regarding how to implement efficient multiple precision algorithms.
That is, to not only explain a limited subset of the core theory behind the algorithms but also the various ``housekeeping''
elements that are neglected by authors of other texts on the subject. Several well renowned texts \cite{TAOCPV2,HAC}
give considerably detailed explanations of the theoretical aspects of algorithms and often very little information
regarding the practical implementation aspects.

In most cases how an algorithm is explained and how it is actually implemented are two very different concepts. For
example, the Handbook of Applied Cryptography (\textit{HAC}), algorithm 14.7 on page 594, gives a relatively simple
algorithm for performing multiple precision integer addition. However, the description lacks any discussion concerning
the fact that the two integer inputs may be of differing magnitudes. As a result the implementation is not as simple
as the text would lead people to believe. Similarly the division routine (\textit{algorithm 14.20, pp. 598}) does not
discuss how to handle the sign or how to handle the dividend's decreasing magnitude in the main loop (\textit{step \#3}).

Both texts also do not discuss several key optimal algorithms required, such as ``Comba'' and Karatsuba multipliers
and fast modular inversion, which we consider practical oversights. These optimal algorithms are vital to achieve
any form of useful performance in non-trivial applications.

To solve this problem the focus of this text is on the practical aspects of implementing a multiple precision integer
package. As a case study the ``LibTomMath''\footnote{Available at \url{http://math.libtomcrypt.org}} package is used
to demonstrate algorithms with real implementations\footnote{In the ISO C programming language.} that have been field
tested and work very well. The LibTomMath library is freely available on the Internet for all uses and this text
discusses a very large portion of the inner workings of the library.

The algorithms that are presented will always include at least one ``pseudo-code'' description followed
by the actual C source code that implements the algorithm. The pseudo-code can be used to implement the same
algorithm in other programming languages as the reader sees fit.

This text shall also serve as a walkthrough of the creation of multiple precision algorithms from scratch, showing
the reader how the algorithms fit together as well as where to start on various tasks.

\section{Discussion and Notation}
\subsection{Notation}
A multiple precision integer of $n$ digits shall be denoted as $x = (x_{n-1}, \ldots, x_1, x_0)_{ \beta }$ and represent
the integer $x \equiv \sum_{i=0}^{n-1} x_i\beta^i$. The elements of the array $x$ are said to be the radix $\beta$ digits
of the integer. For example, $x = (1,2,3)_{10}$ would represent the integer
$1\cdot 10^2 + 2\cdot10^1 + 3\cdot10^0 = 123$.

\index{mp\_int}
The term ``mp\_int'' shall refer to a composite structure which contains the digits of the integer it represents, as well
as auxiliary data required to manipulate the data. These additional members are discussed further in section
\ref{sec:MPINT}. For the purposes of this text a ``multiple precision integer'' and an ``mp\_int'' are assumed to be
synonymous. When an algorithm is specified to accept an mp\_int variable it is assumed the various auxiliary data members
are present as well. An expression of the type \textit{variablename.item} implies that it should evaluate to the
member named ``item'' of the variable. For example, a string of characters may have a member ``length'' which would
evaluate to the number of characters in the string. If the string $a$ equals ``hello'' then it follows that
$a.length = 5$.

For certain discussions more generic algorithms are presented to help the reader understand the final algorithm used
to solve a given problem. When an algorithm is described as accepting an integer input it is assumed the input is
a plain integer with no additional multiple precision members. That is, algorithms that use integers as opposed to
mp\_ints as inputs do not concern themselves with the housekeeping operations required, such as memory management. These
algorithms will be used to establish the relevant theory which will subsequently be used to describe a multiple
precision algorithm to solve the same problem.

\subsection{Precision Notation}
The variable $\beta$ represents the radix of a single digit of a multiple precision integer and
must be of the form $q^p$ for $q, p \in \Z^+$. A single precision variable must be able to represent integers in
the range $0 \le x < q \beta$ while a double precision variable must be able to represent integers in the range
$0 \le x < q \beta^2$. The extra radix-$q$ factor allows additions and subtractions to proceed without truncation of the
carry. Since all modern computers are binary, it is assumed that $q$ is two.

\index{mp\_digit} \index{mp\_word}
Within the source code that will be presented for each algorithm, the data type \textbf{mp\_digit} will represent
a single precision integer type, while the data type \textbf{mp\_word} will represent a double precision integer type. In
several algorithms (notably the Comba routines) temporary results will be stored in arrays of double precision mp\_words.
For the purposes of this text $x_j$ will refer to the $j$'th digit of a single precision array and $\hat x_j$ will refer to
the $j$'th digit of a double precision array. Whenever an expression is to be assigned to a double precision
variable it is assumed that all single precision variables are promoted to double precision during the evaluation.
Expressions that are assigned to a single precision variable are truncated to fit within the precision of a single
precision data type.

For example, if $\beta = 10^2$ a single precision data type may represent a value in the
range $0 \le x < 10^3$, while a double precision data type may represent a value in the range $0 \le x < 10^5$. Let
$a = 23$ and $b = 49$ represent two single precision variables. The single precision product shall be written
as $c \leftarrow a \cdot b$ while the double precision product shall be written as $\hat c \leftarrow a \cdot b$.
In this particular case, $\hat c = 1127$ and $c = 127$. The most significant digit of the product would not fit
in a single precision data type and as a result $c \ne \hat c$.

\subsection{Algorithm Inputs and Outputs}
Within the algorithm descriptions all variables are assumed to be scalars of either single or double precision
as indicated. The only exception to this rule is when variables have been indicated to be of type mp\_int. This
distinction is important as scalars are often used as array indices and various other counters.

\subsection{Mathematical Expressions}
The $\lfloor \mbox{ } \rfloor$ brackets imply an expression truncated to an integer not greater than the expression
itself. For example, $\lfloor 5.7 \rfloor = 5$. Similarly the $\lceil \mbox{ } \rceil$ brackets imply an expression
rounded to an integer not less than the expression itself. For example, $\lceil 5.1 \rceil = 6$. Typically when
the $/$ division symbol is used the intention is to perform an integer division with truncation. For example,
$5/2 = 2$, which will often be written as $\lfloor 5/2 \rfloor = 2$ for clarity. When an expression is written as a
fraction a real value division is implied, for example ${5 \over 2} = 2.5$.

The norm of a multiple precision integer, for example $\vert \vert x \vert \vert$, will be used to represent the number of digits in the representation
of the integer. For example, $\vert \vert 123 \vert \vert = 3$ and $\vert \vert 79452 \vert \vert = 5$.

\subsection{Work Effort}
\index{big-Oh}
To measure the efficiency of the specified algorithms, a modified big-Oh notation is used. In this system all
single precision operations are considered to have the same cost\footnote{Except where explicitly noted.}.
That is, a single precision addition, multiplication and division are assumed to take the same time to
complete. While this is generally not true in practice, it will simplify the discussions considerably.

Some algorithms have slight advantages over others, which is why some constants will not be removed in
the notation. For example, a normal baseline multiplication (section \ref{sec:basemult}) requires $O(n^2)$ work while a
baseline squaring (section \ref{sec:basesquare}) requires $O({{n^2 + n}\over 2})$ work. In standard big-Oh notation these
would both be said to be equivalent to $O(n^2)$. However,
in the context of this text this is not the case as the magnitude of the inputs will typically be rather small. As a
result small constant factors in the work effort will make an observable difference in algorithm efficiency.

All of the algorithms presented in this text have a polynomial time work level. That is, of the form
$O(n^k)$ for $n, k \in \Z^{+}$. This will help make useful comparisons in terms of the speed of the algorithms and how
various optimizations will pay off in the long run.

\section{Exercises}
Within the more advanced chapters a section will be set aside to give the reader some challenging exercises related to
the discussion at hand. These exercises are not designed to be prize winning problems, but instead to be thought
provoking. Wherever possible the problems are forward minded, stating problems that will be answered in subsequent
chapters. The reader is encouraged to finish the exercises as they appear to get a better understanding of the
subject material.

That being said, the problems are designed to affirm knowledge of a particular subject matter. Students in particular
are encouraged to verify they can answer the problems correctly before moving on.

Similar to the exercises of \cite[pp. ix]{TAOCPV2} these exercises are given a scoring system based on the difficulty of
the problem. However, unlike \cite{TAOCPV2} the problems do not get nearly as hard. The scoring of these
exercises ranges from one (the easiest) to five (the hardest). The following table summarizes the
scoring system used.

\begin{figure}[ht]
\begin{center}
\begin{small}
\begin{tabular}{cl}
\hline $\left [ 1 \right ]$ & An easy problem that should only take the reader a matter of \\
                            & minutes to solve. Usually does not involve much computer time \\
                            & to solve. \\
\hline $\left [ 2 \right ]$ & An easy problem that involves a marginal amount of computer \\
                            & time usage. Usually requires a program to be written to \\
                            & solve the problem. \\
\hline $\left [ 3 \right ]$ & A moderately hard problem that requires a non-trivial amount \\
                            & of work. Usually involves trivial research and development of \\
                            & new theory from the perspective of a student. \\
\hline $\left [ 4 \right ]$ & A moderately hard problem that involves a non-trivial amount \\
                            & of work and research, the solution to which will demonstrate \\
                            & a higher mastery of the subject matter. \\
\hline $\left [ 5 \right ]$ & A hard problem that involves concepts that are difficult for a \\
                            & novice to solve. Solutions to these problems will demonstrate a \\
                            & complete mastery of the given subject. \\
\hline
\end{tabular}
\end{small}
\end{center}
\caption{Exercise Scoring System}
\end{figure}

Problems at the first level are meant to be simple questions that the reader can answer quickly without programming a solution or
devising new theory. These problems are quick tests to see if the material is understood. Problems at the second level
are also designed to be easy but will require a program or algorithm to be implemented to arrive at the answer. These
two levels are essentially entry level questions.

Problems at the third level are meant to be a bit more difficult than the first two levels. The answer is often
fairly obvious but arriving at an exacting solution requires some thought and skill. These problems will almost always
involve devising a new algorithm or implementing a variation of another algorithm previously presented. Readers who can
answer these questions will feel comfortable with the concepts behind the topic at hand.

Problems at the fourth level are meant to be similar to those of the level three questions except they will require
additional research to be completed. The reader will most likely not know the answer right away, nor will the text provide
the exact details of the answer until a subsequent chapter.

Problems at the fifth level are meant to be the hardest
problems relative to all the other problems in the chapter. People who can correctly answer fifth level problems have a
mastery of the subject matter at hand.

Often problems will be tied together. The purpose of this is to start a chain of thought that will be discussed in future chapters. The reader
is encouraged to answer the follow-up problems and try to draw out the relevance of the problems.

460 

\section{Introduction to LibTomMath}

\subsection{What is LibTomMath?}
LibTomMath is a free and open source multiple precision integer library written entirely in portable ISO C. By portable it
is meant that the library does not contain any code that is computer platform dependent or otherwise problematic to use on
any given platform.

The library has been successfully tested under numerous operating systems including Unix\footnote{All of these
trademarks belong to their respective rightful owners.}, MacOS, Windows, Linux, PalmOS and on standalone hardware such
as the Gameboy Advance. The library is designed to contain enough functionality to be able to develop applications such
as public key cryptosystems and still maintain a relatively small footprint.

\subsection{Goals of LibTomMath}

Libraries which achieve the greatest efficiency are rarely written in a high level programming language such as C. However,
even though this library is written entirely in ISO C, considerable care has been taken to optimize the algorithm implementations within the
library. Specifically the code has been written to work well with the GNU C Compiler (\textit{GCC}) on both x86 and ARM
processors. Wherever possible, highly efficient algorithms, such as Karatsuba multiplication, sliding window
exponentiation and Montgomery reduction have been provided to make the library more efficient.

Even with the nearly optimal and specialized algorithms that have been included the Application Programming Interface
(\textit{API}) has been kept as simple as possible. Often generic placeholder routines will make use of specialized
algorithms automatically without the developer's specific attention. One such example is the generic multiplication
algorithm \textbf{mp\_mul()} which will automatically use Toom-Cook, Karatsuba, Comba or baseline multiplication
based on the magnitude of the inputs and the configuration of the library.

Making LibTomMath as efficient as possible is not the only goal of the LibTomMath project. Ideally the library should
be source compatible with another popular library which makes it more attractive for developers to use. In this case the
MPI library was used as an API template for all the basic functions. MPI was chosen because it is another library that fits
in the same niche as LibTomMath. Even though LibTomMath uses MPI as the template for the function names and argument
passing conventions, it has been written from scratch by Tom St Denis.

The project is also meant to act as a learning tool for students, the logic being that no easy-to-follow ``bignum''
library exists which can be used to teach computer science students how to perform fast and reliable multiple precision
integer arithmetic. To this end the source code has been given quite a few comments and algorithm discussion points.


\section{Choice of LibTomMath}
LibTomMath was chosen as the case study of this text not only because the author of both projects is one and the same but
for more worthy reasons. Other libraries such as GMP \cite{GMP}, MPI \cite{MPI}, LIP \cite{LIP} and OpenSSL
\cite{OPENSSL} have multiple precision integer arithmetic routines but would not be ideal for this text for
reasons that will be explained in the following subsections.

\subsection{Code Base}
The LibTomMath code base is all portable ISO C source code. This means that there are no platform dependent conditional
segments of code littered throughout the source. This clean and uncluttered approach to the library means that a
developer can more readily discern the true intent of a given section of source code without trying to keep track of
what conditional code will be used.

The code base of LibTomMath is well organized. Each function is in its own separate source code file
which allows the reader to find a given function very quickly. On average there are $76$ lines of code per source
file which makes the source very easy to follow. By comparison MPI and LIP are single file projects making code tracing
very hard. GMP has many conditional code segments which also hinder tracing.

When compiled with GCC for the x86 processor and optimized for speed the entire library is approximately $100$KiB\footnote{The notation ``KiB'' means $2^{10}$ octets, similarly ``MiB'' means $2^{20}$ octets.}
which is fairly small compared to GMP (over $250$KiB). LibTomMath is slightly larger than MPI (which compiles to about
$50$KiB) but LibTomMath is also much faster and more complete than MPI.

\subsection{API Simplicity}
LibTomMath is designed after the MPI library and shares the API design. Quite often programs that use MPI will build
with LibTomMath without change. The function names correlate directly to the action they perform. Almost all of the
functions share the same parameter passing convention. The learning curve is fairly shallow with the API provided
which is an extremely valuable benefit for the student and developer alike.

The LIP library is an example of a library with an API that is awkward to work with. LIP uses function names that are often ``compressed'' to
illegible shorthand. LibTomMath does not share this characteristic.

The GMP library also does not return error codes. Instead it uses a POSIX.1 \cite{POSIX1} signal system where errors
are signaled to the host application. This happens to be the fastest approach but definitely not the most versatile. In
effect a math error (i.e., invalid input, heap error, etc.) can cause a program to stop functioning which is definitely
undesirable in many situations.

\subsection{Optimizations}
While LibTomMath is certainly not the fastest library (GMP often beats LibTomMath by a factor of two) it does
feature a set of optimal algorithms for tasks such as modular reduction, exponentiation, multiplication and squaring. GMP
and LIP also feature such optimizations while MPI only uses baseline algorithms with no optimizations. GMP lacks a few
of the additional modular reduction optimizations that LibTomMath features\footnote{At the time of this writing GMP
only had Barrett and Montgomery modular reduction algorithms.}.

LibTomMath is almost always an order of magnitude faster than the MPI library at computationally expensive tasks such as modular
exponentiation. In the grand scheme of ``bignum'' libraries LibTomMath is faster than the average library and usually
slower than the best libraries such as GMP and OpenSSL by only a small factor.

\subsection{Portability and Stability}
LibTomMath will build ``out of the box'' on any platform equipped with a modern version of the GNU C Compiler
(\textit{GCC}). This means that without changes the library will build without configuration or setting up any
variables. LIP and MPI will build ``out of the box'' as well but have numerous known bugs. Most notably the author of
MPI has recently stopped working on his library and LIP has long since been discontinued.

GMP requires a configuration script to run and will not build out of the box. GMP and LibTomMath are still in active
development and are very stable across a variety of platforms.

\subsection{Choice}
LibTomMath is a relatively compact, well documented, highly optimized and portable library which seems only natural for
the case study of this text. Various source files from the LibTomMath project will be included within the text. However,
the reader is encouraged to download their own copy of the library to actually be able to work with the library.


\chapter{Getting Started}
\section{Library Basics}
The trick to writing any useful library of source code is to build a solid foundation and work outwards from it. First,
a problem along with allowable solution parameters should be identified and analyzed. In this particular case the
inability to accommodate multiple precision integers is the problem. Furthermore, the solution must be written
as portable source code that is reasonably efficient across several different computer platforms.

After a foundation is formed the remainder of the library can be designed and implemented in a hierarchical fashion.
That is, to implement the lowest level dependencies first and work towards the most abstract functions last. For example,
before implementing a modular exponentiation algorithm one would implement a modular reduction algorithm.
By building outwards from a base foundation instead of using a parallel design methodology the resulting project is
highly modular. Being highly modular is a desirable property of any project as it often means the resulting product
has a small footprint and updates are easy to perform.

Usually when I start a project I will begin with the header files. I define the data types I think I will need and
prototype the initial functions that are not dependent on other functions (within the library). After I
implement these base functions I prototype more dependent functions and implement them. The process repeats until
I implement all of the functions I require. For example, in the case of LibTomMath I implemented functions such as
mp\_init() well before I implemented mp\_mul() and even further before I implemented mp\_exptmod(). As an example of
why this design works, note that the Karatsuba and Toom-Cook multipliers were written \textit{after} the
dependent function mp\_exptmod() was written. Adding the new multiplication algorithms did not require changes to the
mp\_exptmod() function itself and lowered the total cost of ownership (\textit{so to speak}) and of development
for new algorithms. This methodology allows new algorithms to be tested in a complete framework with relative ease.

FIGU,design_process,Design Flow of the First Few Original LibTomMath Functions.

Only after the majority of the functions were in place did I pursue a less hierarchical approach to auditing and optimizing
the source code. For example, one day I may audit the multipliers and the next day the polynomial basis functions.

It only makes sense to begin the text with the preliminary data types and support algorithms that are required.
This chapter discusses the core algorithms of the library which are the dependencies of every other algorithm.


\section{What is a Multiple Precision Integer?}
Recall that most programming languages, in particular ISO C \cite{ISOC}, only have fixed precision data types that on their own cannot
be used to represent values larger than their precision will allow. The purpose of multiple precision algorithms is
to use fixed precision data types to create and manipulate multiple precision integers which may represent values
that are very large.

As a well known analogy, school children are taught how to form numbers larger than nine by prepending more radix ten digits. In the decimal system
the largest single digit value is $9$. However, by concatenating digits together larger numbers may be represented. Newly prepended digits
(\textit{to the left}) are said to be in a different power of ten column. That is, the number $123$ can be described as having a $1$ in the hundreds
column, $2$ in the tens column and $3$ in the ones column. Or more formally $123 = 1 \cdot 10^2 + 2 \cdot 10^1 + 3 \cdot 10^0$. Computer based
multiple precision arithmetic is essentially the same concept. Larger integers are represented by adjoining fixed
precision computer words with the exception that a different radix is used.


What most people probably do not think about explicitly are the various other attributes that describe a multiple precision
integer. For example, the integer $154_{10}$ has two immediately obvious properties. First, the integer is positive,
that is the sign of this particular integer is positive as opposed to negative. Second, the integer has three digits in
its representation. There is an additional property that the integer possesses that does not concern pencil-and-paper
arithmetic. The third property is how many digit placeholders are available to hold the integer.

The human analogy of this third property is ensuring there is enough space on the paper to write the integer. For example,
if one starts writing a large number too far to the right on a piece of paper they will have to erase it and move left.
Similarly, computer algorithms must maintain strict control over memory usage to ensure that the digits of an integer
will not exceed the allowed boundaries. These three properties make up what is known as a multiple precision
integer or mp\_int for short.


\subsection{The mp\_int Structure}
\label{sec:MPINT}
The mp\_int structure is the ISO C based manifestation of what represents a multiple precision integer. The ISO C standard does not provide for
any such data type but it does provide for making composite data types known as structures. The following is the structure definition
used within LibTomMath.

\index{mp\_int}
\begin{figure}[here]
\begin{center}
\begin{small}
%\begin{verbatim}
\begin{tabular}{l}
\hline
typedef struct \{ \\
\hspace{3mm}int used, alloc, sign;\\
\hspace{3mm}mp\_digit *dp;\\
\} \textbf{mp\_int}; \\
\hline
\end{tabular}
%\end{verbatim}
\end{small}
\caption{The mp\_int Structure}
\label{fig:mpint}
\end{center}
\end{figure}

The mp\_int structure (fig. \ref{fig:mpint}) can be broken down as follows.

\begin{enumerate}
\item The \textbf{used} parameter denotes how many digits of the array \textbf{dp} contain the digits used to represent
a given integer. The \textbf{used} count must be positive (or zero) and may not exceed the \textbf{alloc} count.

\item The \textbf{alloc} parameter denotes how
many digits are available in the array for use by functions before it has to increase in size. When the \textbf{used} count
of a result would exceed the \textbf{alloc} count all of the algorithms will automatically increase the size of the
array to accommodate the precision of the result.

\item The pointer \textbf{dp} points to a dynamically allocated array of digits that represent the given multiple
precision integer. It is padded with $(\textbf{alloc} - \textbf{used})$ zero digits. The array is maintained in a least
significant digit order. As a pencil and paper analogy the array is organized such that the rightmost digits are stored
first starting at the location indexed by zero\footnote{In C all arrays begin at zero.} in the array. For example,
if \textbf{dp} contains $\lbrace a, b, c, \ldots \rbrace$ where \textbf{dp}$_0 = a$, \textbf{dp}$_1 = b$, \textbf{dp}$_2 = c$, $\ldots$ then
it would represent the integer $a + b\beta + c\beta^2 + \ldots$

\index{MP\_ZPOS} \index{MP\_NEG}
\item The \textbf{sign} parameter denotes the sign as either zero/positive (\textbf{MP\_ZPOS}) or negative (\textbf{MP\_NEG}).
\end{enumerate}


\subsubsection{Valid mp\_int Structures}
Several rules are placed on the state of an mp\_int structure and are assumed to be followed for reasons of efficiency.
The only exceptions are when the structure is passed to initialization functions such as mp\_init() and mp\_init\_copy().

\begin{enumerate}
\item The value of \textbf{alloc} may not be less than one. That is \textbf{dp} always points to a previously allocated
array of digits.
\item The value of \textbf{used} may not exceed \textbf{alloc} and must be greater than or equal to zero.
\item The value of \textbf{used} implies the digit at index $(used - 1)$ of the \textbf{dp} array is non-zero. That is,
leading zero digits in the most significant positions must be trimmed.
\begin{enumerate}
\item Digits in the \textbf{dp} array at and above the \textbf{used} location must be zero.
\end{enumerate}
\item The value of \textbf{sign} must be \textbf{MP\_ZPOS} if \textbf{used} is zero;
this represents the mp\_int value of zero.
\end{enumerate}


\section{Argument Passing}
A convention of argument passing must be adopted early on in the development of any library. Making the function
prototypes consistent will help eliminate many headaches in the future as the library grows to significant complexity.
In LibTomMath the multiple precision integer functions accept parameters from left to right as pointers to mp\_int
structures. That means that the source (input) operands are placed on the left and the destination (output) on the right.
Consider the following examples.

\begin{verbatim}
   mp_mul(&a, &b, &c);   /* c = a * b */
   mp_add(&a, &b, &a);   /* a = a + b */
   mp_sqr(&a, &b);       /* b = a * a */
\end{verbatim}

The left to right order is a fairly natural way to implement the functions since it lets the developer read aloud the
functions and make sense of them. For example, the first function would read ``multiply a and b and store in c''.

Certain libraries (\textit{LIP by Lenstra for instance}) accept parameters the other way around, to mimic the order
of assignment expressions. That is, the destination (output) is on the left and arguments (inputs) are on the right. In
truth, it is entirely a matter of preference. In the case of LibTomMath the convention from the MPI library has been
adopted.

Another very useful design consideration, provided for in LibTomMath, is whether to allow argument sources to also be a
destination. For example, the second example (\textit{mp\_add}) adds $a$ to $b$ and stores in $a$. This is an important
feature to implement since it allows the calling functions to cut down on the number of variables they must maintain.
However, to implement this feature specific care has to be given to ensure the destination is not modified before the
source is fully read.


\section{Return Values}
A well implemented application, no matter what its purpose, should trap as many runtime errors as possible and return them
to the caller. By catching runtime errors a library can be guaranteed to prevent undefined behaviour. However, the end
developer can still manage to cause a library to crash. For example, by passing an invalid pointer an application may
fault by dereferencing memory not owned by the application.

In the case of LibTomMath the only errors that are checked for are related to inappropriate inputs (division by zero for
instance) and memory allocation errors. It will not check that the mp\_int passed to any function is valid nor
will it check pointers for validity. Any function that can cause a runtime error will return an error code as an
\textbf{int} data type with one of the following values (fig. \ref{fig:errcodes}).

\index{MP\_OKAY} \index{MP\_VAL} \index{MP\_MEM}
\begin{figure}[here]
\begin{center}
\begin{tabular}{ll}
\hline \textbf{Value} & \textbf{Meaning} \\
\hline \textbf{MP\_OKAY} & The function was successful \\
\hline \textbf{MP\_VAL} & One of the input value(s) was invalid \\
\hline \textbf{MP\_MEM} & The function ran out of heap memory \\
\hline
\end{tabular}
\end{center}
\caption{LibTomMath Error Codes}
\label{fig:errcodes}
\end{figure}

When an error is detected within a function it should free any memory it allocated, often during the initialization of
temporary mp\_ints, and return as soon as possible. The goal is to leave the system in the same state it was when the
function was called. Error checking with this style of API is fairly simple.

\begin{verbatim}
   int err;
   if ((err = mp_add(&a, &b, &c)) != MP_OKAY) {
      printf("Error: %s\n", mp_error_to_string(err));
      exit(EXIT_FAILURE);
   }
\end{verbatim}

The GMP \cite{GMP} library uses C style \textit{signals} to flag errors which is of questionable use. Not all errors are fatal
and it was not deemed ideal by the author of LibTomMath to force developers to have signal handlers for such cases.


\section{Initialization and Clearing}
The logical starting point when actually writing multiple precision integer functions is the initialization and
clearing of the mp\_int structures. These two algorithms will be used by the majority of the higher level algorithms.

Given the basic mp\_int structure an initialization routine must first allocate memory to hold the digits of
the integer. Often it is optimal to allocate a sufficiently large preset number of digits even though
the initial integer will represent zero. If only a single digit were allocated quite a few subsequent reallocations
would occur when operations are performed on the integers. There is a tradeoff between how many default digits to allocate
and how many reallocations are tolerable. Obviously allocating an excessive amount of digits initially will waste
memory and become unmanageable.

If the memory for the digits has been successfully allocated then the rest of the members of the structure must
be initialized. Since the initial state of an mp\_int is to represent the zero integer, the allocated digits must be set
to zero. The \textbf{used} count is set to zero and the \textbf{sign} is set to \textbf{MP\_ZPOS}.

\subsection{Initializing an mp\_int}
An mp\_int is said to be initialized if it is set to a valid, preferably default, state such that all of the members of the
structure are set to valid values. The mp\_init algorithm will perform such an action.

\index{mp\_init}
\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_init}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. Allocate memory and initialize $a$ to a known valid mp\_int state. \\
\hline \\
1. Allocate memory for \textbf{MP\_PREC} digits. \\
2. If the allocation failed return(\textit{MP\_MEM}) \\
3. for $n$ from $0$ to $MP\_PREC - 1$ do \\
\hspace{3mm}3.1 $a_n \leftarrow 0$\\
4. $a.sign \leftarrow MP\_ZPOS$\\
5. $a.used \leftarrow 0$\\
6. $a.alloc \leftarrow MP\_PREC$\\
7. Return(\textit{MP\_OKAY})\\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_init}
\end{figure}

\textbf{Algorithm mp\_init.}
The purpose of this function is to initialize an mp\_int structure so that the rest of the library can properly
manipulate it. It is assumed that the input may not have had any of its members previously initialized which is certainly
a valid assumption if the input resides on the stack.

Before any of the members such as \textbf{sign}, \textbf{used} or \textbf{alloc} are initialized the memory for
the digits is allocated. If this fails the function returns before setting any of the other members. The \textbf{MP\_PREC}
name represents a constant\footnote{Defined in the ``tommath.h'' header file within LibTomMath.}
used to dictate the minimum precision of newly initialized mp\_int integers. Ideally, it is at least equal to the smallest
precision number you'll be working with.

Allocating a block of digits at first instead of a single digit has the benefit of lowering the number of usually slow
heap operations later functions will have to perform. If \textbf{MP\_PREC} is set correctly the slack
memory and the number of heap operations will be trivial.

Once the allocation has been made the digits have to be set to zero as well as the \textbf{used}, \textbf{sign} and
\textbf{alloc} members initialized. This ensures that the mp\_int will always represent the default state of zero regardless
of the original condition of the input.

\textbf{Remark.}
This function introduces the idiosyncrasy that all iterative loops, commonly initiated with the ``for'' keyword, iterate incrementally
when the ``to'' keyword is placed between two expressions. For example, ``for $a$ from $b$ to $c$ do'' means that
a subsequent expression (or body of expressions) is to be evaluated up to $c - b + 1$ times so long as $b \le c$. In each
iteration the variable $a$ is substituted for a new integer that lies inclusively between $b$ and $c$. If $b > c$ occurred
the loop would not iterate. By contrast if the ``downto'' keyword were used in place of ``to'' the loop would iterate
decrementally.


EXAM,bn_mp_init.c

One immediate observation of this initialization function is that it does not return a pointer to an mp\_int structure. It
is assumed that the caller has already allocated memory for the mp\_int structure, typically on the application stack. The
call to mp\_init() is used only to initialize the members of the structure to a known default state.

Here we see (line @23,[email protected]) the memory allocation is performed first. This allows us to exit cleanly and quickly
if there is an error. If the allocation fails the routine will return \textbf{MP\_MEM} to the caller to indicate there
was a memory error. The function XMALLOC is what actually allocates the memory. Technically XMALLOC is not a function
but a macro defined in ``tommath.h''. By default, XMALLOC will evaluate to malloc() which is the C library's built-in
memory allocation routine.

In order to ensure the mp\_int is in a known state the digits must be set to zero. On most platforms this could have been
accomplished by using calloc() instead of malloc(). However, to correctly initialize an integer type to a given value in a
portable fashion you have to actually assign the value. The for loop (line @28,[email protected]) performs this required
operation.

After the memory has been successfully initialized the remainder of the members are initialized
(lines @29,[email protected] through @31,[email protected]) to their respective default states. At this point the algorithm has succeeded and
a success code is returned to the calling function. If this function returns \textbf{MP\_OKAY} it is safe to assume the
mp\_int structure has been properly initialized and is safe to use with other functions within the library.


\subsection{Clearing an mp\_int}
When an mp\_int is no longer required by the application, the memory that has been allocated for its digits must be
returned to the application's memory pool with the mp\_clear algorithm.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_clear}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. The memory for $a$ shall be deallocated. \\
\hline \\
1. If $a$ has been previously freed then return(\textit{MP\_OKAY}). \\
2. for $n$ from 0 to $a.used - 1$ do \\
\hspace{3mm}2.1 $a_n \leftarrow 0$ \\
3. Free the memory allocated for the digits of $a$. \\
4. $a.used \leftarrow 0$ \\
5. $a.alloc \leftarrow 0$ \\
6. $a.sign \leftarrow MP\_ZPOS$ \\
7. Return(\textit{MP\_OKAY}). \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_clear}
\end{figure}

\textbf{Algorithm mp\_clear.}
This algorithm accomplishes two goals. First, it clears the digits and the other mp\_int members. This ensures that
if a developer accidentally reuses a cleared structure it is less likely to cause problems. The second goal
is to free the allocated memory.

The logic behind the algorithm is extended by marking cleared mp\_int structures so that subsequent calls to this
algorithm will not try to free the memory multiple times. Cleared mp\_ints are detectable by having a predefined invalid
digit pointer \textbf{dp} setting.


Once an mp\_int has been cleared the mp\_int structure is no longer in a valid state for any other algorithm
with the exception of algorithms mp\_init, mp\_init\_copy, mp\_init\_size and mp\_clear.

EXAM,bn_mp_clear.c

The algorithm only operates on the mp\_int if it hasn't been previously cleared. The if statement (line @23,a->dp != [email protected])
checks to see if the \textbf{dp} member is not \textbf{NULL}. If the mp\_int is a valid mp\_int then \textbf{dp} cannot be
\textbf{NULL} in which case the if statement will evaluate to true.

The digits of the mp\_int are cleared by the for loop (line @25,[email protected]) which assigns a zero to every digit. Similar to mp\_init()
the digits are assigned zero instead of using block memory operations (such as memset()) since this is more portable.

The digits are deallocated off the heap via the XFREE macro. Similar to XMALLOC the XFREE macro actually evaluates to
a standard C library function. In this case the free() function. Since free() only deallocates the memory the pointer
still has to be reset to \textbf{NULL} manually (line @33,[email protected]).

Now that the digits have been cleared and deallocated the other members are set to their final values (lines @34,= [email protected] and @35,[email protected]).

888 

889 \section{Maintenance Algorithms} 

890 

891 The previous sections describes how to initialize and clear an mp\_int structure. To further support operations 

892 that are to be performed on mp\_int structures (such as addition and multiplication) the dependent algorithms must be 

893 able to augment the precision of an mp\_int and 

894 initialize mp\_ints with differing initial conditions. 

895 

896 These algorithms complete the set of low level algorithms required to work with mp\_int structures in the higher level 

897 algorithms such as addition, multiplication and modular exponentiation. 

898 

\subsection{Augmenting an mp\_int's Precision}
When storing a value in an mp\_int structure, a sufficient number of digits must be available to accommodate the entire
result of an operation without loss of precision. Quite often the size of the array given by the \textbf{alloc} member
is large enough to simply increase the \textbf{used} digit count. However, when the size of the array is too small it
must be resized appropriately to accommodate the result. The mp\_grow algorithm will provide this functionality.

\newpage\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_grow}. \\
\textbf{Input}. An mp\_int $a$ and an integer $b$. \\
\textbf{Output}. $a$ is expanded to accommodate $b$ digits. \\
\hline \\
1. if $a.alloc \ge b$ then return(\textit{MP\_OKAY}) \\
2. $u \leftarrow b\mbox{ (mod }MP\_PREC\mbox{)}$ \\
3. $v \leftarrow b + 2 \cdot MP\_PREC - u$ \\
4. Reallocate the array of digits $a$ to size $v$ \\
5. If the allocation failed then return(\textit{MP\_MEM}). \\
6. for $n$ from $a.alloc$ to $v - 1$ do \\
\hspace{+3mm}6.1 $a_n \leftarrow 0$ \\
7. $a.alloc \leftarrow v$ \\
8. Return(\textit{MP\_OKAY}) \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_grow}
\end{figure}

\textbf{Algorithm mp\_grow.}
It is ideal to prevent reallocations from being performed if they are not required (step one). This is useful to
prevent mp\_ints from growing excessively in code that erroneously calls mp\_grow.

The requested digit count is padded up to the next multiple of \textbf{MP\_PREC} plus an additional \textbf{MP\_PREC} (steps two and three).
This helps prevent many trivial reallocations that would grow an mp\_int by trivially small values.

It is assumed that the reallocation (step four) leaves the lower $a.alloc$ digits of the mp\_int intact. This is much
akin to how the \textit{realloc} function from the standard C library works. Since the newly allocated digits are
assumed to contain undefined values they are initially set to zero.

EXAM,bn_mp_grow.c
A quick optimization is to first determine if a memory reallocation is required at all. The if statement (line @23,[email protected]) checks
if the \textbf{alloc} member of the mp\_int is smaller than the requested digit count. If the count is not larger than \textbf{alloc}
the function skips the reallocation part thus saving time.

When a reallocation is performed it is turned into an optimal request to save time in the future. The requested digit count is
padded upwards to the second multiple of \textbf{MP\_PREC} larger than \textbf{alloc} (line @25, [email protected]). The XREALLOC function is used
to reallocate the memory. As per the other functions, XREALLOC is actually a macro which evaluates to realloc by default. The realloc
function leaves the base of the allocation intact which means the first \textbf{alloc} digits of the mp\_int are the same as before
the reallocation. All that is left is to clear the newly allocated digits and return.

Note that the reallocation result is actually stored in a temporary pointer $tmp$. This is to allow this function to return
an error with a valid pointer. Earlier releases of the library stored the result of XREALLOC into the mp\_int $a$. That would
result in a memory leak if XREALLOC ever failed.
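The growth logic above can be sketched as follows. This is a hedged approximation, not the real bn\_mp\_grow.c: the mp\_int layout, the error codes and the MP\_PREC value of 32 are local assumptions made only so the fragment is self-contained.

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_OKAY  0
#define MP_MEM  -2
#define MP_PREC 32            /* padding unit; value assumed for this sketch */

int mp_grow(mp_int *a, int size)
{
   mp_digit *tmp;
   int n;

   /* step one: skip the reallocation entirely when possible */
   if (a->alloc >= size) {
      return MP_OKAY;
   }

   /* steps two and three: pad to the next multiple of MP_PREC,
      plus an additional MP_PREC digits */
   size += (2 * MP_PREC) - (size % MP_PREC);

   /* reallocate through a temporary pointer so a->dp is still a
      valid (freeable) allocation if the reallocation fails */
   tmp = (mp_digit *)realloc(a->dp, sizeof(mp_digit) * (size_t)size);
   if (tmp == NULL) {
      return MP_MEM;
   }
   a->dp = tmp;

   /* realloc leaves the newly allocated digits undefined; zero them */
   for (n = a->alloc; n < size; n++) {
      a->dp[n] = 0;
   }
   a->alloc = size;
   return MP_OKAY;
}
```

With MP\_PREC $= 32$, a request for $100$ digits is padded to $100 + 64 - 4 = 160$, so several subsequent small growth requests return at step one without touching the heap.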

\subsection{Initializing Variable Precision mp\_ints}
Occasionally the number of digits required will be known in advance of an initialization, based on, for example, the size
of input mp\_ints to a given algorithm. The purpose of algorithm mp\_init\_size is similar to mp\_init except that it
will allocate \textit{at least} a specified number of digits.

\begin{figure}[here]
\begin{small}
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_init\_size}. \\
\textbf{Input}. An mp\_int $a$ and the requested number of digits $b$. \\
\textbf{Output}. $a$ is initialized to hold at least $b$ digits. \\
\hline \\
1. $u \leftarrow b \mbox{ (mod }MP\_PREC\mbox{)}$ \\
2. $v \leftarrow b + 2 \cdot MP\_PREC - u$ \\
3. Allocate $v$ digits. \\
4. for $n$ from $0$ to $v - 1$ do \\
\hspace{3mm}4.1 $a_n \leftarrow 0$ \\
5. $a.sign \leftarrow MP\_ZPOS$\\
6. $a.used \leftarrow 0$\\
7. $a.alloc \leftarrow v$\\
8. Return(\textit{MP\_OKAY})\\
\hline
\end{tabular}
\end{center}
\end{small}
\caption{Algorithm mp\_init\_size}
\end{figure}

\textbf{Algorithm mp\_init\_size.}
This algorithm will initialize an mp\_int structure $a$ like algorithm mp\_init with the exception that the number of
digits allocated can be controlled by the second input argument $b$. The input size is padded upwards so it is a
multiple of \textbf{MP\_PREC} plus an additional \textbf{MP\_PREC} digits. This padding is used to prevent trivial
allocations from becoming a bottleneck in the rest of the algorithms.

Like algorithm mp\_init, the mp\_int structure is initialized to a default state representing the integer zero. This
particular algorithm is useful if the approximate size of the input is known ahead of time. If the approximation is
correct no further memory reallocations are required to work with the mp\_int.

EXAM,bn_mp_init_size.c

The number of digits $b$ requested is padded (line @22,MP_[email protected]) by first augmenting it to the next multiple of
\textbf{MP\_PREC} and then adding \textbf{MP\_PREC} to the result. If the memory can be successfully allocated the
mp\_int is placed in a default state representing the integer zero. Otherwise, the error code \textbf{MP\_MEM} will be
returned (line @27,[email protected]).

The digits are allocated and set to zero at the same time with the calloc() function (line @25,[email protected]). The
\textbf{used} count is set to zero, the \textbf{alloc} count set to the padded digit count and the \textbf{sign} flag set
to \textbf{MP\_ZPOS} to achieve a default valid mp\_int state (lines @29,[email protected], @30,[email protected] and @31,[email protected]). If the function
returns successfully then it is correct to assume that the mp\_int structure is in a valid state for the remainder of the
functions to work with.
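The steps above can be sketched in C. This is a simplified stand-in for bn\_mp\_init\_size.c under local assumptions (mp\_int layout, error codes, MP\_PREC $= 32$), shown only to make the padding and calloc() usage concrete.

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_OKAY  0
#define MP_MEM  -2
#define MP_ZPOS  0
#define MP_PREC 32            /* padding unit; value assumed for this sketch */

int mp_init_size(mp_int *a, int size)
{
   /* pad so the allocation is a multiple of MP_PREC plus MP_PREC extra */
   size += (2 * MP_PREC) - (size % MP_PREC);

   /* calloc() allocates and zeroes the digits in a single call */
   a->dp = (mp_digit *)calloc((size_t)size, sizeof(mp_digit));
   if (a->dp == NULL) {
      return MP_MEM;
   }

   /* default state: the integer zero */
   a->used  = 0;
   a->alloc = size;
   a->sign  = MP_ZPOS;
   return MP_OKAY;
}
```

For example, a request for $40$ digits with MP\_PREC $= 32$ yields an allocation of $40 + 64 - 8 = 96$ digits.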

\subsection{Multiple Integer Initializations and Clearings}
Occasionally a function will require a series of mp\_int data types to be made available simultaneously.
The purpose of algorithm mp\_init\_multi is to initialize a variable length array of mp\_int structures in a single
statement. It is essentially a shortcut to multiple initializations.

\newpage\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_init\_multi}. \\
\textbf{Input}. Variable length array $V_k$ of mp\_int variables of length $k$. \\
\textbf{Output}. The array is initialized such that each mp\_int of $V_k$ is ready to use. \\
\hline \\
1. for $n$ from 0 to $k - 1$ do \\
\hspace{+3mm}1.1. Initialize the mp\_int $V_n$ (\textit{mp\_init}) \\
\hspace{+3mm}1.2. If initialization failed then do \\
\hspace{+6mm}1.2.1. for $j$ from $0$ to $n$ do \\
\hspace{+9mm}1.2.1.1. Free the mp\_int $V_j$ (\textit{mp\_clear}) \\
\hspace{+6mm}1.2.2. Return(\textit{MP\_MEM}) \\
2. Return(\textit{MP\_OKAY}) \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_init\_multi}
\end{figure}

\textbf{Algorithm mp\_init\_multi.}
The algorithm will initialize the array of mp\_int variables one at a time. If a runtime error has been detected
(\textit{step 1.2}) all of the previously initialized variables are cleared. The goal is an ``all or nothing''
initialization which allows for quick recovery from runtime errors.

EXAM,bn_mp_init_multi.c

This function initializes a variable length list of mp\_int structure pointers. However, instead of having the mp\_int
structures in an actual C array they are simply passed as arguments to the function. This function makes use of the
``...'' argument syntax of the C programming language. The list is terminated with a final \textbf{NULL} argument
appended on the right.

The function uses the ``stdarg.h'' \textit{va} functions to step portably through the arguments to the function. A count
$n$ of successfully initialized mp\_int structures is maintained (line @47,[email protected]) such that if a failure does occur,
the algorithm can backtrack and free the previously initialized structures (lines @27,[email protected] to @46,}@).
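The ``all or nothing'' behaviour can be sketched with the stdarg.h facilities as follows. This is an illustrative approximation, not the library's bn\_mp\_init\_multi.c; the mp\_init and mp\_clear helpers and all constants are simplified local assumptions.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdlib.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_OKAY  0
#define MP_MEM  -2
#define MP_ZPOS  0
#define MP_PREC 32

static int mp_init(mp_int *a)
{
   a->dp = (mp_digit *)calloc(MP_PREC, sizeof(mp_digit));
   if (a->dp == NULL) return MP_MEM;
   a->used = 0; a->alloc = MP_PREC; a->sign = MP_ZPOS;
   return MP_OKAY;
}

static void mp_clear(mp_int *a)
{
   if (a->dp != NULL) {
      free(a->dp);
      a->dp = NULL; a->used = a->alloc = 0; a->sign = MP_ZPOS;
   }
}

/* the argument list is terminated by a final NULL pointer */
int mp_init_multi(mp_int *mp, ...)
{
   int res = MP_OKAY;
   int n = 0;                 /* count of successful initializations */
   mp_int *cur = mp;
   va_list args;

   va_start(args, mp);
   while (cur != NULL) {
      if (mp_init(cur) != MP_OKAY) {
         /* backtrack: walk the list a second time, clearing the
            n structures that were successfully initialized */
         va_list clean;
         va_start(clean, mp);
         cur = mp;
         while (n-- > 0) {
            mp_clear(cur);
            cur = va_arg(clean, mp_int *);
         }
         va_end(clean);
         res = MP_MEM;
         break;
      }
      n++;
      cur = va_arg(args, mp_int *);
   }
   va_end(args);
   return res;
}
```

Note the second va\_list used for the backtracking pass: a va\_list that has already been stepped cannot be rewound, so the cleanup loop must restart from the first argument.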

\subsection{Clamping Excess Digits}
When a function anticipates a result will be $n$ digits it is simpler to assume this is true within the body of
the function instead of checking during the computation. For example, a multiplication of an $i$ digit number by a
$j$ digit number produces a result of at most $i + j$ digits. It is entirely possible that the result is $i + j - 1$
though, with no final carry into the last position. However, suppose the destination had to be first expanded
(\textit{via mp\_grow}) to accommodate $i + j - 1$ digits and then further expanded to accommodate the final carry.
That would be a considerable waste of time since heap operations are relatively slow.

The ideal solution is to always assume the result is $i + j$ digits and fix up the \textbf{used} count after the function
terminates. This way a single heap operation (\textit{at most}) is required. However, if the result was not checked
there would be an excess high-order zero digit.

For example, suppose the product of two integers was $x_n = (0x_{n-1}x_{n-2}\ldots x_0)_{\beta}$. The leading zero digit
will not contribute to the precision of the result. In fact, through subsequent operations more leading zero digits would
accumulate to the point where the size of the integer would be prohibitive. As a result even though the precision is very
low the representation is excessively large.

The mp\_clamp algorithm is designed to solve this very problem. It will trim high-order zeros by decrementing the
\textbf{used} count until a nonzero most significant digit is found. Also in this system, zero is considered to be a
positive number which means that if the \textbf{used} count is decremented to zero, the sign must be set to
\textbf{MP\_ZPOS}.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_clamp}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. Any excess leading zero digits of $a$ are removed \\
\hline \\
1. while $a.used > 0$ and $a_{a.used - 1} = 0$ do \\
\hspace{+3mm}1.1 $a.used \leftarrow a.used - 1$ \\
2. if $a.used = 0$ then do \\
\hspace{+3mm}2.1 $a.sign \leftarrow MP\_ZPOS$ \\
\hline \\
\end{tabular}
\end{center}
\caption{Algorithm mp\_clamp}
\end{figure}

\textbf{Algorithm mp\_clamp.}
As can be expected this algorithm is very simple. The loop on step one is expected to iterate only once or twice at
the most. For example, this will happen in cases where there is not a carry to fill the last position. Step two fixes the sign for
when all of the digits are zero to ensure that the mp\_int is valid at all times.

EXAM,bn_mp_clamp.c

Note on line @27,[email protected] how the test on the \textbf{used} count is made on the left of the \&\& operator. In the C programming
language the terms of \&\& are evaluated left to right with a boolean short-circuit if any condition fails. This is
important since if the \textbf{used} count is zero the test on the right would fetch below the array. That is obviously
undesirable. The parentheses on line @28,a>[email protected] are used to make sure the \textbf{used} count is decremented and not
the pointer ``a''.
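The two subtleties just noted (the short-circuit ordering and the parenthesized decrement) can be seen in a compact sketch of the clamping logic; the mp\_int layout and constants are local assumptions, not the real headers.

```c
#include <assert.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_ZPOS 0
#define MP_NEG  1

void mp_clamp(mp_int *a)
{
   /* the used > 0 test on the left of && short-circuits, so the
      digit fetch on the right never reads below the array */
   while (a->used > 0 && a->dp[a->used - 1] == 0) {
      --(a->used);            /* parentheses: decrement used, not a */
   }

   /* zero is positive in this system, so a zero result gets MP_ZPOS */
   if (a->used == 0) {
      a->sign = MP_ZPOS;
   }
}
```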

\section*{Exercises}
\begin{tabular}{cl}
$\left [ 1 \right ]$ & Discuss the relevance of the \textbf{used} member of the mp\_int structure. \\
& \\
$\left [ 1 \right ]$ & Discuss the consequences of not using padding when performing allocations. \\
& \\
$\left [ 2 \right ]$ & Estimate an ideal value for \textbf{MP\_PREC} when performing 1024-bit RSA \\
& encryption when $\beta = 2^{28}$. \\
& \\
$\left [ 1 \right ]$ & Discuss the relevance of the algorithm mp\_clamp. What does it prevent? \\
& \\
$\left [ 1 \right ]$ & Give an example of when the algorithm mp\_init\_copy might be useful. \\
& \\
\end{tabular}

%%%
% CHAPTER FOUR
%%%

\chapter{Basic Operations}

\section{Introduction}
In the previous chapter a series of low level algorithms were established that dealt with initializing and maintaining
mp\_int structures. This chapter will discuss another set of seemingly non-algebraic algorithms which will form the low
level basis of the entire library. While these algorithms are relatively trivial it is important to understand how they
work before proceeding since they will be used almost intrinsically in the following chapters.

The algorithms in this chapter deal primarily with more ``programmer'' related tasks such as creating copies of
mp\_int structures, assigning small values to mp\_int structures and comparisons of the values mp\_int structures
represent.

\section{Assigning Values to mp\_int Structures}
\subsection{Copying an mp\_int}
Assigning the value that a given mp\_int structure represents to another mp\_int structure shall be known as making
a copy for the purposes of this text. The copy of the mp\_int will be a separate entity that represents the same
value as the mp\_int it was copied from. The mp\_copy algorithm provides this functionality.

\newpage\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_copy}. \\
\textbf{Input}. An mp\_int $a$ and $b$. \\
\textbf{Output}. Store a copy of $a$ in $b$. \\
\hline \\
1. If $b.alloc < a.used$ then grow $b$ to $a.used$ digits. (\textit{mp\_grow}) \\
2. for $n$ from 0 to $a.used - 1$ do \\
\hspace{3mm}2.1 $b_{n} \leftarrow a_{n}$ \\
3. for $n$ from $a.used$ to $b.used - 1$ do \\
\hspace{3mm}3.1 $b_{n} \leftarrow 0$ \\
4. $b.used \leftarrow a.used$ \\
5. $b.sign \leftarrow a.sign$ \\
6. return(\textit{MP\_OKAY}) \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_copy}
\end{figure}

\textbf{Algorithm mp\_copy.}
This algorithm copies the mp\_int $a$ such that upon successful termination of the algorithm the mp\_int $b$ will
represent the same integer as the mp\_int $a$. The mp\_int $b$ shall be a complete and distinct copy of the
mp\_int $a$, meaning that the mp\_int $a$ can be modified and it shall not affect the value of the mp\_int $b$.

If $b$ does not have enough room for the digits of $a$ it must first have its precision augmented via the mp\_grow
algorithm. The digits of $a$ are copied over the digits of $b$ and any excess digits of $b$ are set to zero (steps two
and three). The \textbf{used} and \textbf{sign} members of $a$ are finally copied over the respective members of
$b$.

\textbf{Remark.} This algorithm also introduces a new idiosyncrasy that will be used throughout the rest of the
text. The error return codes of other algorithms are not explicitly checked in the pseudocode presented. For example, in
step one of the mp\_copy algorithm the return of mp\_grow is not explicitly checked to ensure it succeeded. Text space is
limited so it is assumed that if an algorithm fails it will clear all temporarily allocated mp\_ints and return
the error code itself. However, the C code presented will demonstrate all of the error handling logic required to
implement the pseudocode.

EXAM,bn_mp_copy.c

Occasionally a dependent algorithm may copy an mp\_int effectively into itself such as when the input and output
mp\_int structures passed to a function are one and the same. For this case it is optimal to return immediately without
copying digits (line @24,a == [email protected]).

The mp\_int $b$ must have enough digits to accommodate the used digits of the mp\_int $a$. If $b.alloc$ is less than
$a.used$ the algorithm mp\_grow is used to augment the precision of $b$ (lines @29,[email protected] to @33,}@). In order to
simplify the inner loop that copies the digits from $a$ to $b$, two aliases $tmpa$ and $tmpb$ point directly at the digits
of the mp\_ints $a$ and $b$ respectively. These aliases (lines @42,[email protected] and @45,[email protected]) allow the compiler to access the digits without first dereferencing the
mp\_int pointers and then subsequently the pointer to the digits.

After the aliases are established the digits from $a$ are copied into $b$ (lines @48,[email protected] to @50,}@) and then the excess
digits of $b$ are set to zero (lines @53,[email protected] to @55,}@). Both ``for'' loops make use of the pointer aliases and in
fact the alias for $b$ is carried through into the second ``for'' loop to clear the excess digits. This optimization
allows the alias to stay in a machine register fairly easily between the two loops.
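The structure of the copy (self-copy check, growth, then the two aliased loops) can be sketched as follows. This is an approximation under local assumptions (mp\_int layout, constants, a simplified mp\_grow helper), not the library's bn\_mp\_copy.c.

```c
#include <assert.h>
#include <stdlib.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_OKAY  0
#define MP_MEM  -2
#define MP_PREC 32

/* simplified growth helper (see the mp_grow discussion earlier) */
static int mp_grow(mp_int *a, int size)
{
   int n;
   mp_digit *tmp;
   if (a->alloc >= size) return MP_OKAY;
   size += (2 * MP_PREC) - (size % MP_PREC);
   tmp = (mp_digit *)realloc(a->dp, sizeof(mp_digit) * (size_t)size);
   if (tmp == NULL) return MP_MEM;
   a->dp = tmp;
   for (n = a->alloc; n < size; n++) a->dp[n] = 0;
   a->alloc = size;
   return MP_OKAY;
}

int mp_copy(mp_int *a, mp_int *b)
{
   int res, n;
   mp_digit *tmpa, *tmpb;

   /* copying onto itself is a no-op */
   if (a == b) {
      return MP_OKAY;
   }

   /* grow b first if it cannot hold the used digits of a */
   if (b->alloc < a->used) {
      if ((res = mp_grow(b, a->used)) != MP_OKAY) {
         return res;
      }
   }

   /* aliases save a double dereference on every loop iteration */
   tmpa = a->dp;
   tmpb = b->dp;
   for (n = 0; n < a->used; n++) {
      *tmpb++ = *tmpa++;
   }

   /* tmpb is carried into the second loop to clear the excess digits */
   for (; n < b->used; n++) {
      *tmpb++ = 0;
   }

   b->used = a->used;
   b->sign = a->sign;
   return MP_OKAY;
}
```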

\textbf{Remarks.} The use of pointer aliases is an implementation methodology first introduced in this function that will
be used considerably in other functions. Technically, a pointer alias is simply a shorthand alias used to lower the
number of pointer dereferencing operations required to access data. For example, a for loop may resemble

\begin{alltt}
for (x = 0; x < 100; x++) \{
   a->num[4]->dp[x] = 0;
\}
\end{alltt}

This could be rewritten using aliases as

\begin{alltt}
mp_digit *tmpa;
tmpa = a->num[4]->dp;
for (x = 0; x < 100; x++) \{
   *tmpa++ = 0;
\}
\end{alltt}

In this case an alias is used to access the array of digits within an mp\_int structure directly. It may seem that a
pointer alias is strictly not required as a compiler may optimize out the redundant pointer operations. However, there
are two dominant reasons to use aliases.

The first reason is that most compilers will not effectively optimize pointer arithmetic. For example, some optimizations
may work for the Microsoft Visual C++ compiler (MSVC) and not for the GNU C Compiler (GCC). Also some optimizations may
work for GCC and not MSVC. As such it is ideal to find a common ground for as many compilers as possible. Pointer
aliases optimize the code considerably before the compiler even reads the source code which means the end compiled code
stands a better chance of being faster.

The second reason is that pointer aliases can often make an algorithm simpler to read. Consider the first ``for''
loop of the function mp\_copy() rewritten to not use pointer aliases.

\begin{alltt}
/* copy all the digits */
for (n = 0; n < a->used; n++) \{
   b->dp[n] = a->dp[n];
\}
\end{alltt}

Whether this code is harder to read depends strongly on the individual. However, it is quantifiably slightly more
complicated as there are four variables within the statement instead of just two.

\subsubsection{Nested Statements}
Another commonly used technique in the source routines is that certain sections of code are nested. This is used in
particular with the pointer aliases to highlight code phases. For example, a Comba multiplier (discussed in chapter six)
will typically have three different phases. First the temporaries are initialized, then the columns are calculated and
finally the carries are propagated. In this example the middle column production phase will typically be nested as it
uses temporary variables and aliases the most.

The nesting also simplifies the source code as variables that are nested are only valid for their scope. As a result
the various temporary variables required do not propagate into other sections of code.

\subsection{Creating a Clone}
Another common operation is to make a local temporary copy of an mp\_int argument. To initialize an mp\_int
and then copy another existing mp\_int into the newly initialized mp\_int will be known as creating a clone. This is
useful within functions that need to modify an argument but do not wish to actually modify the original copy. The
mp\_init\_copy algorithm has been designed to help perform this task.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_init\_copy}. \\
\textbf{Input}. An mp\_int $a$ and $b$\\
\textbf{Output}. $a$ is initialized to be a copy of $b$. \\
\hline \\
1. Init $a$. (\textit{mp\_init}) \\
2. Copy $b$ to $a$. (\textit{mp\_copy}) \\
3. Return the status of the copy operation. \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_init\_copy}
\end{figure}

\textbf{Algorithm mp\_init\_copy.}
This algorithm will initialize an mp\_int variable and copy another previously initialized mp\_int variable into it. As
such this algorithm will perform two operations in one step.

EXAM,bn_mp_init_copy.c

This will initialize \textbf{a} and make it a verbatim copy of the contents of \textbf{b}. Note that
\textbf{a} will have its own memory allocated which means that \textbf{b} may be cleared after the call
and \textbf{a} will be left intact.

\section{Zeroing an Integer}
Resetting an mp\_int to the default state is a common step in many algorithms. The mp\_zero algorithm will be used to
perform this task.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_zero}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. Zero the contents of $a$ \\
\hline \\
1. $a.used \leftarrow 0$ \\
2. $a.sign \leftarrow$ MP\_ZPOS \\
3. for $n$ from 0 to $a.alloc - 1$ do \\
\hspace{3mm}3.1 $a_n \leftarrow 0$ \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_zero}
\end{figure}

\textbf{Algorithm mp\_zero.}
This algorithm simply resets an mp\_int to the default state.

EXAM,bn_mp_zero.c

After the function is completed, all of the digits are zeroed, the \textbf{used} count is zeroed and the
\textbf{sign} variable is set to \textbf{MP\_ZPOS}.

\section{Sign Manipulation}
\subsection{Absolute Value}
With the mp\_int representation of an integer, calculating the absolute value is trivial. The mp\_abs algorithm will compute
the absolute value of an mp\_int.

\newpage\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_abs}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. Computes $b = \vert a \vert$ \\
\hline \\
1. Copy $a$ to $b$. (\textit{mp\_copy}) \\
2. If the copy failed return(\textit{MP\_MEM}). \\
3. $b.sign \leftarrow MP\_ZPOS$ \\
4. Return(\textit{MP\_OKAY}) \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_abs}
\end{figure}

\textbf{Algorithm mp\_abs.}
This algorithm computes the absolute value of an mp\_int input. First it copies $a$ over $b$. This is an example of an
algorithm where the check in mp\_copy that determines if the source and destination are equal proves useful. This allows,
for instance, the developer to pass the same mp\_int as the source and destination to this function without additional
logic to handle it.

EXAM,bn_mp_abs.c

\subsection{Integer Negation}
With the mp\_int representation of an integer, calculating the negation is also trivial. The mp\_neg algorithm will compute
the negative of an mp\_int input.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_neg}. \\
\textbf{Input}. An mp\_int $a$ \\
\textbf{Output}. Computes $b = -a$ \\
\hline \\
1. Copy $a$ to $b$. (\textit{mp\_copy}) \\
2. If the copy failed return(\textit{MP\_MEM}). \\
3. If $a.used = 0$ then return(\textit{MP\_OKAY}). \\
4. If $a.sign = MP\_ZPOS$ then do \\
\hspace{3mm}4.1 $b.sign = MP\_NEG$. \\
5. else do \\
\hspace{3mm}5.1 $b.sign = MP\_ZPOS$. \\
6. Return(\textit{MP\_OKAY}) \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_neg}
\end{figure}

\textbf{Algorithm mp\_neg.}
This algorithm computes the negation of an input. First it copies $a$ over $b$. If $a$ has no used digits then
the algorithm returns immediately. Otherwise it flips the sign flag and stores the result in $b$. Note that if
$a$ had no digits then it must be positive by definition. Had step three been omitted then the algorithm would return
zero as negative.

EXAM,bn_mp_neg.c
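The sign-flipping logic, including the special case for zero, can be sketched as follows. This is not the library's bn\_mp\_neg.c: the copy helper here is deliberately simplified and assumes $b$ already holds enough digits (the real routine grows $b$ first, as shown earlier in this chapter), and all types and constants are local assumptions.

```c
#include <assert.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_OKAY 0
#define MP_ZPOS 0
#define MP_NEG  1

/* simplified copy: PRECONDITION b->alloc >= a->used */
static int mp_copy(mp_int *a, mp_int *b)
{
   int n;
   if (a == b) return MP_OKAY;
   for (n = 0; n < a->used; n++) b->dp[n] = a->dp[n];
   for (; n < b->used; n++) b->dp[n] = 0;   /* clear excess digits */
   b->used = a->used;
   b->sign = a->sign;
   return MP_OKAY;
}

int mp_neg(mp_int *a, mp_int *b)
{
   int res;

   if ((res = mp_copy(a, b)) != MP_OKAY) {
      return res;
   }

   /* step three: zero has no digits and must stay MP_ZPOS */
   if (b->used == 0) {
      b->sign = MP_ZPOS;
      return MP_OKAY;
   }

   /* otherwise flip the sign flag */
   b->sign = (a->sign == MP_ZPOS) ? MP_NEG : MP_ZPOS;
   return MP_OKAY;
}
```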

\section{Small Constants}
\subsection{Setting Small Constants}
Often an mp\_int must be set to a relatively small value such as $1$ or $2$. For these cases the mp\_set algorithm is useful.

\begin{figure}[here]
\begin{center}
\begin{tabular}{l}
\hline Algorithm \textbf{mp\_set}. \\
\textbf{Input}. An mp\_int $a$ and a digit $b$ \\
\textbf{Output}. Make $a$ equivalent to $b$ \\
\hline \\
1. Zero $a$ (\textit{mp\_zero}). \\
2. $a_0 \leftarrow b \mbox{ (mod }\beta\mbox{)}$ \\
3. $a.used \leftarrow \left \lbrace \begin{array}{ll}
1 & \mbox{if }a_0 > 0 \\
0 & \mbox{if }a_0 = 0
\end{array} \right .$ \\
\hline
\end{tabular}
\end{center}
\caption{Algorithm mp\_set}
\end{figure}

\textbf{Algorithm mp\_set.}
This algorithm sets an mp\_int to a small single digit value. Step one ensures that the integer is reset to the default state. The
single digit is set (\textit{modulo $\beta$}) and the \textbf{used} count is adjusted accordingly.

EXAM,bn_mp_set.c

Line @21,mp_[email protected] calls mp\_zero() to clear the mp\_int and reset the sign. Line @22,MP_[email protected] copies the digit
into the least significant location. Note the usage of a new constant \textbf{MP\_MASK}. This constant is used to quickly
reduce an integer modulo $\beta$. Since $\beta$ is of the form $2^k$ for any suitable $k$, it suffices to perform a binary AND with
$MP\_MASK = 2^k - 1$ to perform the reduction. Finally line @23,a>[email protected] will set the \textbf{used} member with respect to the
digit actually set. This function will always make the integer positive.
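The masking trick can be seen in a short sketch. This is an illustration under local assumptions, not bn\_mp\_set.c: the mp\_int layout is a stand-in and $\beta = 2^{28}$ (so $k = 28$) is assumed only for the sake of the example.

```c
#include <assert.h>

typedef unsigned long mp_digit;
typedef struct { int used, alloc, sign; mp_digit *dp; } mp_int;
#define MP_ZPOS 0
#define DIGIT_BIT 28                 /* assumes beta = 2^28 */
#define MP_MASK ((((mp_digit)1) << DIGIT_BIT) - ((mp_digit)1))

/* reset to the default state (see the mp_zero algorithm) */
static void mp_zero(mp_int *a)
{
   int n;
   a->used = 0;
   a->sign = MP_ZPOS;
   for (n = 0; n < a->alloc; n++) {
      a->dp[n] = 0;
   }
}

void mp_set(mp_int *a, mp_digit b)
{
   mp_zero(a);                    /* step one: reset to default state */
   a->dp[0] = b & MP_MASK;        /* reduce modulo beta with a binary AND */
   a->used  = (a->dp[0] != 0) ? 1 : 0;
}
```

Since mp\_zero() sets the sign to MP\_ZPOS and mp\_set() never touches it again, the result is always non-negative, matching the remark above.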
