
More on Converting Numbers to

the Double-Base Number System

Valérie Berthé1 , Laurent Imbert1,2 , and Graham A. Jullien2


1 LIRMM, CNRS UMR 5506, 161 rue Ada, 34392 Montpellier cedex 5, France
2 ATIPS, CISaC, University of Calgary, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada

Research Report LIRMM-04031

October 2004

Abstract

In this article, we present new results on the problem of converting a number x from
binary to the double-base number system (DBNS), i.e., as a sum of mixed powers of 2 and
3. A greedy algorithm was previously proposed which requires lookup tables in order to
find the largest number of the form 2^a 3^b less than or equal to x. We address this problem
by using results from the theory of continued fractions and Diophantine approximation. An
approximate version of the greedy algorithm is proposed for the conversion problem, and experimental
results demonstrate its efficiency. Although the material presented in this article is
mainly theoretical, the proposed algorithms could lead to very efficient implementations for
cryptographic or digital signal processing applications.

Keywords: Continued fractions, Diophantine approximation, Double-base number system

1 Introduction

The Double-Base number system (DBNS), introduced by V. Dimitrov and G. A. Jullien [1],
has advantages in many applications, such as cryptography [2] and digital signal processing [3].

Recently, in his Ph.D. dissertation [4], R. Muscedere proposed several hardware-based solutions

for the difficult operations in the Multi-Dimensional Logarithmic Number System (MDLNS),

which can be seen as a generalization of the DBNS. He addresses the problems of addition,

subtraction, and conversion from binary. Efficient methods have been proposed – using lookup-

tables with special range addressing schemes – for digital signal processing applications, where

the dynamic range of the numbers does not usually exceed 16 to 32 bits. However, such table-based

solutions become unrealistic to implement as the numbers grow in size, as with cryptographic

applications for example, and the techniques also seem to be quite difficult to generalize.

The main objective of this paper is to find a theoretical approach to the problem of converting
a number from binary to DBNS. We tackle the problem using continued fractions,
Ostrowski's number system, and Diophantine approximation.

In the second part of the paper, we propose an approximate algorithm for the conversion
from binary to DBNS, which has many advantages over the classical approach.

In the Double-Base number system, we represent integers in the form

x = Σ_i 2^{a_i} 3^{b_i},    (1)

where a_i, b_i are non-negative, independent integers. Following from B. M. M. de Weger's definition
of s-integers [5] – an integer is called an s-integer if all of its prime divisors are among the
first s primes – we shall refer to numbers of the form 2^a 3^b as 2-integers in the rest of the paper.

This representation is clearly highly redundant. For every integer x, the representations with
the minimum number of digits, i.e., the minimum number of 2-integers (less than or equal to x),
are called the canonic double-base number representations. For example, 127 has 783 different
representations, among which 3 are canonic (with only three 2-integers):

127 = 2^2 3^3 + 2^1 3^2 + 2^0 3^0 = 2^2 3^3 + 2^4 3^0 + 2^0 3^1 = 2^5 3^1 + 2^0 3^3 + 2^2 3^0.

In the rest of the paper, we shall refer to the size of a DBNS representation as the number of
digits, i.e., the number of terms in (1), required to represent a given integer.

An elegant way to visualize DBNS numbers is to use a two-dimensional array, with, for

example, columns representing powers of 2 and rows representing powers of 3. Each term in (1)

is represented by a colored cell. The three canonic representations of 127 are given in Figure 1.

[Figure 1: three grids whose columns are indexed by powers of 2 (1, 2, 4, 8, . . . ) and whose rows are indexed by powers of 3 (1, 3, 9, 27); a colored cell marks each term of the representation.]

Figure 1: The three canonic DBNS representations of 127

The problem of finding the canonic DBNS representation of a given integer is difficult.
In [6], V. Dimitrov and G. A. Jullien proposed a greedy algorithm which provides the
so-called near-canonic double-base number representation (NCDBNR). In this article, and for
reasons we shall explain further, we use the terms full-greedy, or F-greedy, to refer to Algorithm 1
presented below. It has been proved that F-greedy terminates in O(log x / log log x) iterations.

Algorithm 1 F-greedy
Input : A positive integer x

Output : R, the sequence of exponents in the F-greedy DBNS representation of x

1: R←∅

2: while x > 0 do

3: Find s = 2^a 3^b, the largest 2-integer less than or equal to x

4: R ← R ∪ {a, b}

5: x← x−s

6: end while
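To make the procedure concrete, the following Python sketch implements Algorithm 1 with an exhaustive, lookup-table-free inner search for the largest 2-integer (this is the straightforward scan discussed later in Section 7.1). The function names are ours and the code is only illustrative; the point of the paper is precisely to replace this inner search by the continued-fraction method of Sections 3 to 5.

```python
def largest_2_integer_naive(x):
    # For every binary exponent a with 2^a <= x, take the largest b with 2^a * 3^b <= x,
    # and keep the overall maximum (exhaustive baseline, no lookup tables).
    best, best_ab = 0, (0, 0)
    a, pow2 = 0, 1
    while pow2 <= x:
        b, s = 0, pow2
        while 3 * s <= x:
            s, b = 3 * s, b + 1
        if s > best:
            best, best_ab = s, (a, b)
        a, pow2 = a + 1, 2 * pow2
    return best_ab

def f_greedy(x):
    # Algorithm 1 (F-greedy): repeatedly subtract the largest 2-integer <= x.
    rep = []
    while x > 0:
        a, b = largest_2_integer_naive(x)
        rep.append((a, b))
        x -= 2**a * 3**b
    return rep

print(f_greedy(127))   # [(2, 3), (1, 2), (0, 0)], i.e. 127 = 2^2*3^3 + 2^1*3^2 + 2^0*3^0
```

On this small example the greedy output happens to coincide with the first canonic representation of 127 given above.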

In this paper we investigate the problem of finding the largest 2-integer less than or equal

to x, required at each iteration of F-greedy. For small numbers, the use of lookup tables

seems to be the best approach. However, for larger integers, it becomes unrealistic and other

solutions are required. Although it is not a difficult computational problem, we shall see that

our solution is much more efficient than performing an exhaustive search.

Let us define the problem more precisely. We try to find two non-negative integers a, b such
that 2^a 3^b ≤ x, and among the solutions to this problem, 2^a 3^b is the largest possible value; or
equivalently,

2^a 3^b = max { 2^c 3^d ; (c, d) ∈ N^2 and 2^c 3^d ≤ x }.    (2)

If we let a, b ∈ N be such that 2^a 3^b ≤ x, our problem can be reformulated as finding
non-negative integers a and b such that

a log 2 + b log 3 ≤ log x,    (3)

and such that no other integers c, d ≥ 0 give a better left approximation to log x.

Let us define α = log_3 2 and β = {log_3 x} = log_3 x − ⌊log_3 x⌋ (β is the fractional part of
log_3 x). Then we try to find the best left approximation to log_3 x with non-negative integers. If
a, b are solutions to this problem, then, for all (c, d) ∈ N^2 with (c, d) ≠ (a, b) and c α + d ≤ log_3 x, we have

c α + d < a α + b ≤ β + ⌊log_3 x⌋ = log_3 x.    (4)

A graphical interpretation of this problem is to consider the line ∆ of equation v = −α u +
log_3 x. The solutions are the points with integer coordinates located in the area delimited by
the line ∆ and the axes. The best solution corresponds to the point with the smallest vertical
distance to the line (we could equivalently consider the horizontal distance).

2 Continued fractions and Ostrowski’s number system

To solve the problem of finding the largest 2-integer ≤ x, we use results from the theory of

continued fractions and diophantine approximation. We introduce the necessary mathematical

background in this section.

A simple continued fraction is an expression of the form

α = a_0 + 1/(a_1 + 1/(a_2 + 1/(a_3 + · · · ))),

where a_0 = ⌊α⌋ and a_1, a_2, . . . are integers ≥ 1. The a_i's are called the partial quotients. A
continued fraction is represented by the sequence (a_n)_{n∈N}, which can either be finite or infinite.

Every irrational real number α can be expressed uniquely as an infinite simple continued
fraction, written in compact abbreviated notation as α = [a_0, a_1, a_2, a_3, . . . ]. Similarly, every
rational number can be expressed uniquely as a finite simple continued fraction. For example,
the infinite continued fraction expansions of the irrationals π, e and log_3 2 (that we shall use in
the remainder of this paper) are

π = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, . . . ],

e = [2, 1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, . . . ],

log_3 2 = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2, 23, 2, . . . ].

The quantity obtained by restricting the continued fraction to its first n + 1 partial quotients,

p_n / q_n = [a_0, a_1, a_2, . . . , a_n],

is called the n-th convergent of α. The numbers p_n and q_n satisfy the recurrence

p_{n+1} = a_{n+1} p_n + p_{n−1},    q_{n+1} = a_{n+1} q_n + q_{n−1},    (5)

which makes them easily computable starting with p_{−1} = 1, q_{−1} = 0, p_0 = a_0, q_0 = 1. They satisfy
the important identity

p_{k+1} q_k − p_k q_{k+1} = (−1)^k,

and it is known that

p_n / q_n → α    (n → ∞).
Thus, the sequence of convergents of an infinite continued fraction can be used to define rational

approximations of an irrational number. For example, the first convergents of π are listed in

Table 1.

Partial quotients        Convergents      Value
[3]                      3/1              3.000000000
[3, 7]                   22/7             3.142857143
[3, 7, 15]               333/106          3.141509434
[3, 7, 15, 1]            355/113          3.141592920
[3, 7, 15, 1, 292]       103993/33102     3.141592653

Table 1: The first partial quotients and convergents of π
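Recurrence (5) is straightforward to run in a few lines of Python; the following sketch (our own, illustrative naming) reproduces Table 1 from the partial quotients of π.

```python
def convergents(partial_quotients):
    # p_{-1}=1, q_{-1}=0, p_0=a_0, q_0=1, then recurrence (5).
    p, q = [1, partial_quotients[0]], [0, 1]
    for a in partial_quotients[1:]:
        p.append(a * p[-1] + p[-2])
        q.append(a * q[-1] + q[-2])
    return list(zip(p[1:], q[1:]))

for pn, qn in convergents([3, 7, 15, 1, 292]):
    print(pn, qn, pn / qn)
# (3, 1), (22, 7), (333, 106), (355, 113), (103993, 33102) -- the rows of Table 1
```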

Ostrowski’s number system [7] is associated with the series (qn )n∈N of the denominators of

the convergents of the continued fraction expansion of an irrational number 0 < α < 1. The

following proposition holds.

Proposition 1 Every integer N can be written uniquely in the form

N = Σ_{k=1}^{m} d_k q_{k−1},    (6)

where

0 ≤ d_1 ≤ a_1 − 1, and 0 ≤ d_k ≤ a_k for k > 1,
d_k = 0 if d_{k+1} = a_{k+1}.

For example, if α = (1 + √5)/2 = [1, 1, 1, 1, . . . ] is the golden section, (q_n)_{n∈N} is the sequence of
Fibonacci numbers, and the condition d_k = 0 if d_{k+1} = a_{k+1} corresponds to the fact that we do
not have two consecutive ones in the corresponding Zeckendorf representation [8].
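The digits of Proposition 1 can be obtained greedily, from the largest denominator downwards. The sketch below is our own illustrative implementation (the function name and the list layout are assumptions, not part of the original text).

```python
def ostrowski_digits(N, q):
    # Digits d_1, ..., d_m of Proposition 1: N = sum_{k=1}^{m} d_k * q_{k-1},
    # computed greedily from the largest q_{k-1} <= N downwards.  The greedy
    # choice automatically satisfies the two digit conditions of the proposition.
    j = 0
    while j + 1 < len(q) and q[j + 1] <= N:
        j += 1                          # now q[j] <= N (and N < q[j+1] if it exists)
    digits = []
    for i in range(j, -1, -1):          # compute d_{j+1}, d_j, ..., d_1
        d, N = divmod(N, q[i])
        digits.append(d)
    return digits[::-1]                 # [d_1, d_2, ..., d_{j+1}]

# Golden section: the q_n are the Fibonacci numbers and the expansion is the Zeckendorf one.
print(ostrowski_digits(17, [1, 1, 2, 3, 5, 8, 13, 21]))
# [0, 1, 0, 1, 0, 0, 1], i.e. 17 = q_1 + q_3 + q_6 = 1 + 3 + 13
```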

Ostrowski’s representation of integers can be extended to real numbers [9]. The base is given

by the sequence (θn )n∈N , where θn = (qn α − pn ). Note that the sign of θ n is equal to (−1)n .

Proposition 2 Every real number β such that −α ≤ β < 1 − α can be written uniquely in the
form

β = Σ_{k=1}^{+∞} b_k θ_{k−1},    (7)

where

0 ≤ b_1 ≤ a_1 − 1, and 0 ≤ b_k ≤ a_k for k > 1,
b_k = 0 if b_{k+1} = a_{k+1},
b_k ≠ a_k for infinitely many even and odd integers k.

Prop. 2 can be used to approximate β modulo 1 (by only considering the fractional parts)
by numbers of the form Nα. If we represent β using (7), the integers N which provide the best
approximations are given by the series

N_n = Σ_{k=1}^{n} b_k q_{k−1}.    (8)

Equation (8) provides a series of best approximations to β on both sides; i.e., the numbers
N_n α can either be greater or smaller than β (modulo 1). If we are only interested in the best
left approximations to β, we must consider the base (|θ_n|)_{n∈N}. The following proposition holds.

Proposition 3 Every real number β such that 0 ≤ β < 1 can be written uniquely in the form

β = Σ_{k=1}^{+∞} c_k |θ_{k−1}|,    (9)

where

0 ≤ c_k ≤ a_k for k > 1,
c_{k+1} = 0 if c_k = a_k,
c_k ≠ a_k for infinitely many even integers k.

In this case, the sequence of best left approximations is more difficult to define due to the

alternating signs of (θn )n∈N . In the next section, we present an algorithm, inspired by [10],

which solves this problem.

3 Definition of the sequence of inhomogeneous best approximations of β

Let 0 < α < 1 be such that α = [0, a_1, a_2, . . . ] (α is irrational), and let 0 < β ≤ 1 be given. Inhomogeneous
left approximations of β are numbers of the form kα + l less than or equal to β, with k, l
integers. It is clear that there are infinitely many such approximations. Here, we want to define
two increasing sequences of integers (k_n)_{n∈N} and (l_n)_{n∈N} such that, for all n ∈ N,

0 < k_n α − l_n < k_{n+1} α − l_{n+1} < β,

and, furthermore, for all n, for all k < k_{n+1} with k ≠ k_n, and for all l ∈ Z such that 0 ≤ kα − l ≤ β,

0 ≤ kα − l < k_n α − l_n < β.

For simplicity, we define, for all n, f_n = |θ_n|. We have f_{−1} = 1, f_0 = α, f_1 = 1 − a_1 α, and,
for all n ≥ 1,

f_{n+1} = f_{n−1} − a_{n+1} f_n.    (10)

The sequence (f_n + f_{n+1})_{n∈N} is decreasing, and since 0 < β ≤ 1, there exists a unique non-negative
integer n such that

f_n + f_{n+1} < β ≤ f_n + f_{n−1}.    (11)

Before we give the algorithm that defines the series of best left inhomogeneous approximations

of β, we prove two lemmas.

Lemma 1 Let 0 < β ≤ 1 and (f_n)_{n∈N} be defined as above. Then, there exist a unique non-negative
integer n, a unique non-negative integer c, and a unique real number e such that

β = c f_n + f_{n+1} + e,    (12)

with 0 < e ≤ f_n, 1 ≤ c ≤ a_{n+1} if n ≥ 1, and 1 ≤ c ≤ a_1 − 1 if n = 0.

Proof: If n ≥ 1, then with f_n + f_{n+1} < β ≤ f_n + f_{n−1} and (10), we have f_n < β − f_{n+1} ≤
f_{n−1} + f_n − f_{n+1} = (a_{n+1} + 1) f_n. If n = 0, then f_0 + f_1 < β ≤ 1 = f_{−1} = a_1 f_0 + f_1. We note
that a_1 ≥ 2 in this case. □

Lemma 2 Let α be an irrational number such that 0 < α < 1 and α = [0, a_1, a_2, . . . ]. Also
define (p_n)_{n∈N}, (q_n)_{n∈N}, the sequences of the numerators and denominators of the convergents
of α. Let 0 < β ≤ 1 and (f_n)_{n∈N} be defined as above, and let n, c, e be the unique values satisfying (12).
We define the integers k, l by setting

k = q_n,  l = p_n,  if n is even;
k = −c q_n + q_{n+1},  l = −c p_n + p_{n+1},  if n is odd,

where c is the unique integer greater than or equal to 1 given by (12). Then we have 0 < β − (kα − l) < β.

Proof: Assume first that n is even. We have β − (kα − l) = β − f_n, and thus 0 < β − (kα − l) < β.
Now, if n is odd, β − (kα − l) = β − [−c(q_n α − p_n) + (q_{n+1} α − p_{n+1})] = β − (c f_n + f_{n+1}) = e, and
hence 0 < β − (kα − l) ≤ f_n < β. □

Algorithm 2 below computes the two sequences of non-negative integers (k_n)_{n∈N} and (l_n)_{n∈N} such
that (k_n α − l_n) corresponds to the inhomogeneous best left approximations to β.

This algorithm is inspired by [11]. For similar algorithms, see [12, 13, 14]. Note that, with the
notation of Algorithm 2, β − (k_{i+1} α − l_{i+1}) is equal to e_i if n_i is odd, and to (c_i − 1) f_{n_i} + f_{n_i+1} + e_i
if n_i is even. Hence, we may have n_{i+1} = n_i. This happens if and only if n_i is even and c_i > 1;
this would then happen (c_i − 1) times before the sequence (n_i) resumes increasing to +∞.

Algorithm 2 Computes the sequence (k_n α − l_n)_{n∈N} of inhomogeneous best left approximations to β
With (f_n)_{n∈N} defined as above, we start with k_0 = 0, l_0 = 0, and we inductively define n_i, c_i,
e_i, k_i, and l_i as follows. If

β − (k_i α − l_i) = c_i f_{n_i} + f_{n_i+1} + e_i,

with 0 < e_i ≤ f_{n_i}, 1 ≤ c_i ≤ a_{n_i+1} if n_i > 0, and 1 ≤ c_i ≤ a_1 − 1 if n_i = 0, then we define

k_{i+1} = k_i + q_{n_i},   l_{i+1} = l_i + p_{n_i},   if n_i is even,
k_{i+1} = k_i − c_i q_{n_i} + q_{n_i+1},   l_{i+1} = l_i − c_i p_{n_i} + p_{n_i+1},   if n_i is odd.

Let us prove that Algorithm 2 does actually provide the best left approximations to β.

Proposition 4 Let 0 < α < 1 be irrational such that α = [0, a_1, a_2, . . . ], and let 0 < β ≤ 1 be
given. Let (p_n/q_n)_{n∈N} be the sequence of the convergents of α. Then, the increasing sequences
of integers (k_i)_{i∈N} and (l_i)_{i∈N} given by Algorithm 2 satisfy, for all i ∈ N,

0 < k_i α − l_i < k_{i+1} α − l_{i+1} < β,    (13)

and, furthermore, for all i, for all k with k_i < k < k_{i+1}, and for all l ∈ Z such that 0 ≤ kα − l ≤ β,

0 ≤ kα − l < k_i α − l_i < β.    (14)

Proof: From Lemma 2, we have, for all i, 0 < k_i α − l_i < β. We first prove (13) by considering the
cases n_i even and n_i odd separately. If n_i is even, then β > k_{i+1} α − l_{i+1} = (k_i α − l_i) + q_{n_i} α − p_{n_i} =
(k_i α − l_i) + f_{n_i} > (k_i α − l_i) > 0. We prove the case n_i odd in a similar way. If n_i is odd, then
β > k_{i+1} α − l_{i+1} = (k_i α − l_i) − c_i (q_{n_i} α − p_{n_i}) + q_{n_i+1} α − p_{n_i+1} = (k_i α − l_i) + c_i f_{n_i} + f_{n_i+1} >
k_i α − l_i > 0.

Let us now consider k_i < k < k_{i+1} and l ∈ Z such that 0 ≤ kα − l ≤ β, and let us prove (14).
By rewriting β − (kα − l), we have 0 ≤ β − (kα − l) = β − (k_i α − l_i) + (k_i α − l_i −
k_{i+1} α + l_{i+1}) + (k_{i+1} α − l_{i+1} − kα + l) ≤ β. What we prove in the next two cases, which depend
on the parity of n_i, is that β − (kα − l) > β − (k_i α − l_i).

• Let us first assume that n_i is even. We have

β − (kα − l) = β − (k_i α − l_i) − f_{n_i} + (k_{i+1} α − l_{i+1} − kα + l).

Thus, what remains to be proved is that the last term, (k_{i+1} α − l_{i+1} − kα + l), is greater
than f_{n_i}.

Since |k_{i+1} − k| < |k_{i+1} − k_i| = q_{n_i}, we have |k_{i+1} α − l_{i+1} − kα + l| > f_{n_i}. (See [15] for
a proof.)

To complete the proof, let us consider the sign of (k_{i+1} α − l_{i+1} − kα + l). From (11) and
Algorithm 2, we know that |β − (k_i α − l_i) − f_{n_i}| ≤ f_{n_i−1}.

– If k_{i+1} − k ≠ q_{n_i−1}, then |k_{i+1} α − l_{i+1} − kα + l| > f_{n_i−1}, and since 0 ≤ kα − l ≤ β,
we have (k_{i+1} α − l_{i+1} − kα + l) > 0.

– If k_{i+1} − k = q_{n_i−1}, since n_i − 1 is odd, we have (k_{i+1} α − l_{i+1} − kα + l) = (q_{n_i−1} α −
p_{n_i−1}) = −f_{n_i−1} < 0. And we get β − (kα − l) < β − (k_i α − l_i) − f_{n_i} − f_{n_i−1} < 0,
which contradicts our assumption.

• If we now assume that n_i is odd, we have

β − (kα − l) = β − (k_i α − l_i) − (c_i f_{n_i} + f_{n_i+1}) + (k_{i+1} α − l_{i+1} − kα + l).

Here, what remains to be proved is that the last term, (k_{i+1} α − l_{i+1} − kα + l), is greater
than c_i f_{n_i} + f_{n_i+1}.

Since |k_{i+1} − k| < |k_{i+1} − k_i| = q_{n_i+1} − c_i q_{n_i}, we have |k_{i+1} α − l_{i+1} − kα + l| > f_{n_i+1} +
c_i f_{n_i}. Moreover, we also know from (12) and Algorithm 2 that |β − (k_i α − l_i) − c_i f_{n_i} − f_{n_i+1}| ≤
f_{n_i}. Thus, we have (k_{i+1} α − l_{i+1} − kα + l) > 0.

Thus, in both cases we have β − (kα − l) > β − (k_i α − l_i). This concludes the proof. □

4 Explicit solution of the inhomogeneous approximation problem

As stated in the introduction, finding the largest 2-integer less than or equal to x is equivalent to
finding two non-negative integers a, b such that 2^a 3^b ≤ x and, among the many possible solutions
to this problem, 2^a 3^b takes the largest possible value.

Let a, b ∈ N be one of the solutions to the approximation problem, that is, such that 2^a 3^b ≤ x.
Clearly, we have

a log 2 + b log 3 ≤ log x.

If α = log_3 2 (note that α is irrational and 0 < α < 1), and β = {log_3 x} is the fractional
part of log_3 x, i.e., β = log_3 x − ⌊log_3 x⌋, then the problem reduces to finding the two
non-negative integers a, b such that

a α + b ≤ β + ⌊log_3 x⌋.

We note that a ≤ ⌊log_2 x⌋ and b ≤ ⌊log_3 x⌋.

We are thus looking for (p, q) ∈ N^2 such that

p α − q ≤ β,
p α − q = max_{(r,s)∈N^2} { r α − s ; 0 ≤ r α − s ≤ β, r ≤ ⌊log_2 x⌋, s ≤ ⌊log_3 x⌋ }.

From p, q, we easily get the non-negative exponents a, b by setting a = p and b = ⌊log_3 x⌋ − q.

Proposition 5 Let x ∈ N be given. Let α = log_3 2 (0 < α < 1 and α ∉ Q), and let β = {log_3 x}.
Let n be such that k_n ≤ ⌊log_2 x⌋ < k_{n+1}. Let p = k_n, q = l_n. Then

p α − q = max_{(r,s)∈N^2} { r α − s ; 0 ≤ r α − s ≤ β, r ≤ ⌊log_2 x⌋, s ≤ ⌊log_3 x⌋ }.

If a = p and b = ⌊log_3 x⌋ − q, we get the expected result

2^a 3^b = max { 2^c 3^d ; (c, d) ∈ N^2 and 2^c 3^d ≤ x }.

Proof: This follows directly from the proof of Prop. 4 in Section 3. □

5 Example

Let x = 23832098195. We want to define a, b ≥ 0 such that 2^a 3^b is the largest 2-integer less than
or equal to x. Let α = log_3 2 ≈ 0.6309. We have β = {log_3 x} ≈ 0.7495 and ⌊log_3 x⌋ = 21. We
set k_0 = 0, l_0 = 0. The partial quotients and the corresponding convergents in the continued
fraction expansion of α are given in Table 2.

i                        0          1          2          3          4          5          6
a_i                      0          1          1          1          2          2          3
p_i                      0          1          1          2          5          12         41
q_i                      1          1          2          3          8          19         65
f_i = |q_i α − p_i|      0.630930   0.369070   0.261860   0.107211   0.047438   0.012335   0.010434

Table 2: Partial quotients of the continued fraction expansion of α = log_3 2, and the corresponding
sequences (p_i)_{i≥0}, (q_i)_{i≥0}, and (|q_i α − p_i|)_{i≥0}

Table 3 gives the first best inhomogeneous left approximations to β. The stop condition is
reached at iteration 4, because we get a negative ternary exponent (21 − 39 = −18). We thus
retain the 3rd approximation, which gives a = 17 and b = 21 − 10 = 11.

i    β − (k_i α − l_i)    n_i    c_i    e_i       k_{i+1}    l_{i+1}
0    0.7495               1      1      0.1186    1          0
1    0.1186               4      2      0.0114    9          5
2    0.0712               4      1      0.0114    17         10
3    0.0237               5      1      0.0009    63         39

Table 3: Best left approximations of β ≈ 0.7495 with numbers of the form kα − l

In order to find the DBNS representation of x, we then apply the same algorithm to the value
x − 2^17 3^11 = 613086611. For completeness, we give the DBNS representation of x provided by
the F-greedy algorithm:

x = 2^17 3^11 + 2^7 3^14 + 2^7 3^8 + 2^2 3^8 + 2^9 3^0 + 2^2 3^1 + 2^0 3^1.
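The computation above can be reproduced with the short Python sketch below. It is our own illustrative implementation of Algorithm 2 combined with Proposition 5: the function name, the use of floating-point arithmetic, and the small tie-breaking epsilon are ours, and the list of partial quotients of α = log_3 2 is assumed to be long enough for the size of x (see Section 7.2). The optional depth argument anticipates Section 6: with depth=None the largest 2-integer is returned, with depth=d the search stops after at most d iterations of Algorithm 2.

```python
import math

def best_left_2_integer(x, cf, depth=None):
    # cf = partial quotients of alpha = log_3 2, e.g. [0, 1, 1, 1, 2, 2, 3, 1, 5, 2, 23, 2].
    alpha = math.log(2, 3)
    L2, L3 = int(math.log2(x)), int(math.log(x, 3))
    beta = math.log(x, 3) - L3                  # beta = {log_3 x}
    if beta <= 0:                               # x is (numerically) a power of 3
        return 0, L3
    # p_i, q_i from recurrence (5) and f_i = |q_i*alpha - p_i|; the lists are shifted by one:
    # index 0 holds p_{-1}, q_{-1}, f_{-1}, so p[i+1] = p_i, q[i+1] = q_i, f[i+1] = f_i.
    p, q = [1, 0], [0, 1]
    for a in cf[1:]:
        p.append(a * p[-1] + p[-2])
        q.append(a * q[-1] + q[-2])
    f = [abs(qi * alpha - pi) for pi, qi in zip(p, q)]
    k, l, it = 0, 0, 0
    while depth is None or it < depth:
        r = beta - (k * alpha - l)
        n = 0                                   # unique n with f_n + f_{n+1} < r <= f_n + f_{n-1}
        while not f[n + 1] + f[n + 2] < r:
            n += 1
        c = int((r - f[n + 2]) / f[n + 1] - 1e-9)   # decomposition (12); epsilon guards float ties
        if n % 2 == 0:                          # n even
            k2, l2 = k + q[n + 1], l + p[n + 1]
        else:                                   # n odd
            k2, l2 = k - c * q[n + 1] + q[n + 2], l - c * p[n + 1] + p[n + 2]
        if k2 > L2:                             # stop condition (Prop. 5): k would exceed floor(log_2 x)
            break
        k, l, it = k2, l2, it + 1
    return k, L3 - l                            # a = k, b = floor(log_3 x) - l

cf = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2, 23, 2]
print(best_left_2_integer(23832098195, cf))     # (17, 11): 2^17 * 3^11 = 23219011584, remainder 613086611
```

Iterating this function on the successive remainders, as in the F-greedy loop, reproduces on this example the seven-term representation of x given above.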

6 A-greedy: An approximate greedy algorithm

The greedy algorithm is not optimal, in the sense that it does not necessarily produce a canonic
DBNS representation. However, within this algorithm, the step which consists in finding the
largest 2-integer less than or equal to the input x is optimal (we have proved in the previous
sections that Algorithm 2 returns the best left approximation to β). It seems then natural to
investigate the potential advantages of an approximate greedy algorithm, where we only perform
a few iterations of Algorithm 2 to define a 2-integer, although not the largest, less than or equal to x.

A-greedy is presented in Algorithm 3. It returns R, a DBNS representation of x, where only
d iterations of Algorithm 2 are performed at each step to define a "good" 2-integer less than or
equal to x. In the following, we shall refer to d as the depth of search, or simply the depth.

Algorithm 3 A-greedy
Input : Two positive integers x, d

Output : R, the sequence of exponents in the A-greedy DBNS representation of x

1: R←∅

2: while x > 0 do

3: Apply d iterations of Algorithm 2 to find s = 2^a 3^b ≤ x

4: R ← R ∪ {a, b}

5: x← x−s

6: end while

In the example presented in Section 5, the successive best left approximations to x = 23832098195
are 2^1 3^21, 2^9 3^16, and 2^17 3^11. Figure 2 shows the DBNS representations obtained for x when A-greedy
is applied at depths 1, 2, and 3. As expected, we remark that the number of digits increases as
the depth decreases; we get 12 digits at depth 1, 8 digits at depth 2, and 7 digits at depth 3
(note that depth 3 is equivalent to the full-greedy solution in this case). Also, the largest binary
exponent is smaller for smaller depths.

[Figure 2: three grids in the style of Figure 1, with panels (a) depth 1, (b) depth 2, and (c) depth 3.]

Figure 2: DBNS representations of x = 23832098195 given by A-greedy at depths 1, 2, and 3,
respectively. Note that depth 3 is equivalent to the full-greedy solution in this case
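Assuming the best_left_2_integer sketch given after the example of Section 5 (with its optional depth argument), an illustrative version of Algorithm 3 reduces to the following loop; as with the other sketches, the names and the floating-point arithmetic are ours.

```python
def a_greedy(x, d, cf):
    # Algorithm 3 (A-greedy): at each step, at most d iterations of Algorithm 2
    # are used to find a "good" (not necessarily largest) 2-integer <= x.
    rep = []
    while x > 0:
        a, b = best_left_2_integer(x, cf, depth=d)
        rep.append((a, b))
        x -= 2**a * 3**b
    return rep

cf = [0, 1, 1, 1, 2, 2, 3, 1, 5, 2, 23, 2]
for d in (1, 2, 3):
    print(d, a_greedy(23832098195, d, cf))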

These results were to be expected. However, the following questions seem difficult to answer
in the general case: by using the A-greedy algorithm at a given depth d, how many digits are
required to represent x compared to the solution provided by F-greedy? What decrease (resp.
increase) can we expect on the binary (resp. ternary) exponents? In order to answer these
questions, we have implemented the approximate greedy algorithm at different depths, for a
thousand randomly chosen numbers of sizes ranging from 64 to 512 bits. The results are shown
in Figures 3 to 6, and our experiments are summarized in Table 4. We first remark that for all
our tests (up to 512-bit integers), depth 5 is equivalent to the F-greedy algorithm. This means
that we must perform 6 iterations of Algorithm 2 to get an approximation which corresponds
to a negative ternary exponent, i.e., the stop condition. By using the A-greedy algorithm, we
perform at most d iterations. It is possible that the stop condition occurs before we reach the
desired depth, or would occur at the (d + 1)-th iteration. Only in those two cases do we get the
same result as the F-greedy algorithm, i.e., the largest 2-integer less than or equal to x.
[Figure 3: two panels, (a) depth 2 and (b) depth 3, plotting the number of digits and the distance to greedy against the experiment index (0 to 1000).]

Figure 3: Number of digits obtained with the full-greedy algorithm (upper curves) and its difference with
the approximated versions at depths 2, 3, and 4, for 1000 randomly chosen 64-bit numbers

[Figure 4: two panels, (a) depth 3 and (b) depth 4, with the same layout as Figure 3.]

Figure 4: Number of digits obtained with the full-greedy algorithm (upper curves) and its difference with
the approximated versions at depths 2, 3, and 4, for 1000 randomly chosen 128-bit numbers

Size of x    Approx. # digits      Avg. dist. to F-greedy at depth d
(in bits)    with F-greedy       d = 1    d = 2    d = 3    d = 4    d = 5
64           12                   9.08     2.35     0.47     0        -
128          20                  19.76     5.8      1.4      0.02     0
256          35                   -       14.5      4.6      0.5      0
512          62                   -        -       12.14     2.2      0

Table 4: Average distance to the full-greedy solution at various depths. Note that depth 5 is

equivalent to the full-greedy algorithm in all our experiments

[Figure 5: two panels, (a) depth 3 and (b) depth 4, with the same layout as Figure 3.]

Figure 5: Number of digits obtained with the full-greedy algorithm (upper curves) and its difference with
the approximated versions at depths 2, 3, and 4, for 1000 randomly chosen 256-bit numbers

[Figure 6: two panels, (a) depth 3 and (b) depth 4, with the same layout as Figure 3.]

Figure 6: Number of digits obtained with the full-greedy algorithm (upper curves) and its difference with
the approximated versions at depths 2, 3, and 4, for 1000 randomly chosen 512-bit numbers

It is also interesting to remark that at medium depths (d = 3 or 4), the size of the representations
can be very close to the F-greedy ones, and in some cases even better, i.e., with fewer digits.

Depending on the application, we might not need the F-greedy DBNS representation, but
rather settle for an approximate one, with slightly more digits, especially if the time required
for the conversion drops. Another consequence of using A-greedy is that the binary
exponent is likely to be smaller at small depths than with F-greedy. It is clear that the ternary
exponent increases at the same time, but not as fast as the binary exponent decreases (the ratio
is α ≈ 0.63).

As an example of such an application for which the A-greedy approach can be advantageous,
we have been investigating an exponentiation algorithm for cryptographic applications (we do
not give any details about this algorithm in this article, since this research is currently in
progress and a journal paper is being prepared), whose complexity depends on the DBNS
representation of some quantity e. The complexity of our algorithm (the number of elementary
operations) is equal to the largest binary exponent plus the size of the DBNS representation of
e. The largest ternary exponent only influences the precomputations and memory requirements.
In Table 5, we give a few examples, which clearly show the gain we get with the A-greedy
compared to the F-greedy solution for this specific cryptographic application. We can remark
that the loss in the precomputation step, directly influenced by the largest ternary exponent, is
very small.

7 Discussions

7.1 Performance comparisons

A straightforward approach to the problem of finding the largest 2-integer less than or equal to x
consists in computing the distance to the line ∆ of equation v = −αu + β for every integer
u from 0 to ⌊log_2 x⌋, and keeping the values (u, v) which lead to the smallest horizontal (or
vertical) distance, i.e., the smallest fractional part of β − αu over all integers 0 ≤ u ≤ ⌊log_2 x⌋.
(More efficiently, we can consider the line w = −log_2(3) u + log_2 x and keep the minimum
distance over all integers 0 ≤ u ≤ ⌊log_3 x⌋, simply because there are fewer ternary exponents
to consider: ⌊log_3 x⌋ ≤ ⌊log_2 x⌋.)

In Fig. 7, we have plotted the line ∆ of equation v = −0.6309u + 0.7495, which corresponds
to the previous example, together with the points we have to scan if we perform the exhaustive
search explained in the previous paragraph, and those we deduce from Algorithm 2. We
clearly remark that the algorithm based on continued fractions and Ostrowski's number system
presented in Section 3 only scans four possible solutions, whereas the straightforward algorithm
must scan all the points of a discrete line under ∆.

                      # digits    Max bin    Max tern    # Operations    Gain
F-greedy              35          192        134         227
A-greedy (d = 3)      38          75         154         113             114
F-greedy              36          231        137         267
A-greedy (d = 3)      36          132        154         168             99
F-greedy              33          199        151         232
A-greedy (d = 3)      40          75         151         115             117
F-greedy              35          227        147         262
A-greedy (d = 2)      48          67         148         115             147
F-greedy              33          189        145         222
A-greedy (d = 2)      46          70         152         116             106
F-greedy              35          170        149         205
A-greedy (d = 2)      54          65         159         119             86

Table 5: Gain achieved with the A-greedy compared to the F-greedy for a specific cryptographic
application using 256-bit numbers

For large values of x, Algorithm 2 is much faster than the exhaustive search. Our ex-

periments, on thousands of randomly chosen numbers, show a speedup of about 35%. The

approximate version presented in section 6 provides even better improvements.

7.2 Precomputations

Continued fraction expansions are generally expensive to compute. Fortunately, in our case,
α = log_3 2 only depends on the parameters of the DBNS, i.e., the bases 2 and 3. The partial quotients
a_i, the convergents p_i, q_i, and the sequence (f_i)_i = (|q_i α − p_i|)_i can thus be precomputed and
stored in lookup tables. The number of elements we have to precompute depends on the size of
the input value x we want to convert. Assume that x is an n-bit integer, i.e., 2^n ≤ x < 2^{n+1}.
Since the binary exponent a satisfies 0 ≤ a ≤ log_2 x, we have 0 ≤ a < n + 1. Our algorithm
returns the digits of a in the Ostrowski representation; thus a = Σ_{i=1}^{m} c_i q_{i−1} < n + 1. Prop. 1
ensures the uniqueness of Ostrowski's representation. Thus m, the maximum index, is the value
i such that q_i < n + 1 ≤ q_{i+1}. For example, if 2^64 ≤ x < 2^65, since q_6 = 65, we only need the
denominators q_1 to q_5 to represent a.
[Figure 7: the integer points (0, 21), (1, 21), (9, 16), and (17, 11) scanned by the proposed algorithm, plotted under the line ∆; the horizontal axis (binary exponent) runs from 0 to 35 and the vertical axis (ternary exponent) from 0 to 22.]

Figure 7: Graphical interpretation of the problem of finding the largest 2-integer less than (or
equal) to x = 23832098195, and the points scanned using both the straightforward approach and
the proposed algorithm

If m is the maximum required index, we thus need to precompute the sequences (p_n) and (q_n) up
to index m + 1 (we recall that, when n_i is odd, Algorithm 2 performs k_{i+1} = k_i − c_i q_{n_i} + q_{n_i+1}).
Table 6 gives the number of elements of the sequences a_i, p_i, q_i, f_i that have to be stored, based
on the size of the input number x.

Size of x (in bits)    # Elements
8 to 18                5
19 to 64               6
65 to 83               7
84 to 483              8
484 to 1053            9

Table 6: Number of elements of the sequences a_i, p_i, q_i, and f_i that must be precomputed,
based on the size of the input number x

The conclusion is that for most applications, both the F-greedy and A-greedy algorithms can be
implemented at a very low memory cost.
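The sizes in Table 6 can be checked with a few lines of Python (our own sketch; the list q is assumed to hold the denominators q_0, q_1, . . . of the convergents of log_3 2).

```python
def num_precomputed(nbits, q):
    # Find m with q_m < nbits + 1 <= q_{m+1}; the elements of index 1 to m+1 must be stored.
    m = 0
    while q[m + 1] < nbits + 1:
        m += 1
    return m + 1

q = [1, 1, 2, 3, 8, 19, 65, 84, 485, 1054, 24727]
for nbits in (18, 64, 83, 256, 512):
    print(nbits, num_precomputed(nbits, q))   # 5, 6, 7, 8, 9 -- consistent with Table 6
```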

8 Conclusions

Using theoretical results from diophantine approximation, we proposed an efficient solution to

the problem of findind the largest 2-integer less than or equal to a positive integer x, when

lookup tables cannot be used. This operation is required at each step of a greedy algorithm

which converts binary numbers into DBNS. An approximated version of the greedy algorithm,

called A-greedy, is also presented, which only looks for a ”not-too-bad” 2-integer less than x

at each step. Experimental results illustrate the efficiency and potential advantages of our

approach for specific applications where large numbers are required such as in cryptography.

Others applications, for example in digital signal processing, can undoubtedly take advantages

of this approximated greedy solution. This study will be investigated further to answer some

more difficult questions related to the double-base number system and its generalization, the

multi-dimensional logarithmic number system.

References

[1] V. S. Dimitrov, G. A. Jullien, and W. C. Miller. Theory and applications of the double-base

number system. IEEE Transactions on Computers, 48(10):1098–1106, October 1999.

[2] V. S. Dimitrov and T. V. Cooklev. Two algorithms for modular exponentiation based on

nonstandard arithmetics. IEICE Transactions on Fundamentals of Electronics, Communi-

cations and Computer Science, E78-A(1):82–87, January 1995. Special issue on cryptogra-

phy and information security.

[3] V. S. Dimitrov, J. Eskritt, L. Imbert, G. A. Jullien, and W. C. Miller. The use of the multi-

dimensional logarithmic number system in DSP applications. In 15th IEEE symposium on

Computer Arithmetic, pages 247–254, Vail, CO, USA, June 2001.

[4] R. Muscedere. Difficult Operations in the Multi-Dimensional Logarithmic Number System.

PhD thesis, University of Windsor, Windsor, ON, Canada, 2003.

[5] B. M. M. de Weger. Algorithms for Diophantine equations, volume 65 of CWI Tracts.

Centrum voor Wiskunde en Informatica, Amsterdam, 1989.

[6] V. S. Dimitrov, G. A. Jullien, and W. C. Miller. An algorithm for modular exponentiation.

Information Processing Letters, 66(3):155–159, May 1998.

[7] J.-P. Allouche and J. Shallit. Automatic Sequences. Cambridge University Press, 2003.

[8] E. Zeckendorf. Représentation des nombres naturels par une somme de nombres de Fi-

bonacci ou de nombres de Lucas. Fibonacci Quarterly, 10:179–182, 1972.

[9] V. Berthé. Autour du système de numération d’Ostrowski. Bulletin of the Belgian Mathe-

matical Society, 8:209–239, 2001.

[10] V. Berthé, N. Chekhova, and S. Ferenczi. Covering numbers: Arithmetics and dynamics

for rotations and internal exchanges. Journal d’Analyse Mathématique, 79:1–31, 1999.

[11] N. B. Slater. Gaps and steps for the sequence nθ mod 1. Mathematical Proceedings of the

Cambridge Philosophical Society, 63:1115–1123, 1967.

[12] V. T. Sós. On the theory of diophantine approximations. I. Acta Mathematica Hungarica,

8:461–472, 1957.

[13] V. T. Sós. On the theory of diophantine approximations. II. Acta Mathematica Hungarica,

9:229–241, 1958.

[14] V. T. Sós. On a problem of Hartman about normal forms. Colloquium Mathematicum,

7:155–160, 1960.

[15] J. W. S. Cassels. An Introduction to Diophantine Approximation. Cambridge University

Press, 1957.

