About the product of GCD and LCM

Let $a$ and $b$ be two nonzero natural numbers; it is well known that their product equals the product of their GCD and LCM, that is:

$a \cdot b = LCM(a,b) \cdot GCD(a,b)$ (1)

This formula is very useful in computer science, since efficient algorithms exist to compute the GCD: the LCM of two numbers can therefore be obtained by first computing their GCD and then applying (1).
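As a minimal sketch in Python (the function names are my own choice), combining Euclid's algorithm with formula (1):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b)
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    # Formula (1): a * b = LCM(a, b) * GCD(a, b)
    return a * b // gcd(a, b)

print(lcm(12, 90))  # 180
```

Note that the integer division is exact, since GCD(a, b) always divides the product a·b.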

Equation (1) can be proved in many ways; proofs are easily found on the web (here are a couple of proofs of mine, for example).

A less known, more general form of (1) is the following. Let $a_{1},…,a_{N}$ be nonzero natural numbers; then:

$\prod_{i=1}^{N}a_{i} = LCM(a_{1},…,a_{N})\cdot GCD\left(\frac{1}{a_{1}}\prod_{i=1}^{N}a_{i}, …, \frac{1}{a_{N}}\prod_{i=1}^{N}a_{i}\right)$ (2)

which reduces to (1) when we put $N=2$.
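Formula (2) can be checked numerically; here is a quick sketch in Python (the helper name `check_identity` is mine), using the standard-library `math.gcd` and `math.lcm`:

```python
from math import gcd, lcm, prod  # math.lcm needs Python >= 3.9
from functools import reduce

def check_identity(nums):
    # Left-hand side of (2): the product of all the numbers
    left = prod(nums)
    # Each GCD argument is the total product with one factor removed
    cofactors = [left // a for a in nums]
    # Right-hand side of (2): LCM of the numbers times GCD of the cofactors
    right = reduce(lcm, nums) * reduce(gcd, cofactors)
    return left == right

print(check_identity([3, 4, 5, 6]))  # True
```

For $N=2$ the cofactors are just $b$ and $a$, and the check collapses to formula (1).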

Here’s a proof of (2), which is simple provided that we compute the GCD in a way slightly different from the traditional one (though obviously equivalent).

Let $a_{1},…,a_{N}$ be $N$ nonzero natural numbers; each of them can be factorized into primes in a unique way. Some of the prime factors may be shared among two or more of the numbers, while others may appear in the factorization of one number only (the latter are usually referred to as “not shared” factors). Let $F = \{f_{1},…,f_{K}\}$ be the set of all the prime factors appearing in the factorizations of $a_{1},…,a_{N}$; both shared and unique factors belong to $F$. Each of the $a_{1},…,a_{N}$ then factorizes as follows:

$a_{i} = \prod_{j=1}^{K}f_{j}^{e_{ij}}, i = 1, …, N$ (3)

where some of the exponents can be zero (when that specific factor does not appear in the factorization of some $a_{i}$). From (3), GCD and LCM can easily be defined as follows:

$LCM(a_{1}, …, a_{N}) = \prod_{j=1}^{K}f_{j}^{\max_{i=1,…,N}(e_{ij})}$ (4)

$GCD(a_{1}, …, a_{N}) = \prod_{j=1}^{K}f_{j}^{\min_{i=1,…,N}(e_{ij})}$ (5)

It’s worth noticing that, if one of the $f_{j}$ does not appear in every $a_{i}$ factorization, its minimum exponent is 0 (zero); therefore that factor does not appear in the GCD factorization, as expected (indeed, using the traditional algorithm we would consider only those factors which appear in all the $a_{i}$ factorizations).
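Definitions (4) and (5) translate directly into code. The following Python sketch (helper names are mine) factorizes each number by trial division and then takes the max/min exponent over the union of all prime factors:

```python
from collections import Counter

def factorize(n):
    # Trial-division prime factorization: returns {prime: exponent}
    factors = Counter()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] += 1
            n //= d
        d += 1
    if n > 1:
        factors[n] += 1
    return factors

def lcm_gcd_by_exponents(nums):
    # Formulas (4) and (5): max exponent for the LCM,
    # min exponent for the GCD, over all primes in F
    facts = [factorize(a) for a in nums]
    primes = set().union(*facts)
    lcm = gcd = 1
    for p in primes:
        exps = [f[p] for f in facts]  # a Counter returns 0 for a missing prime
        lcm *= p ** max(exps)
        gcd *= p ** min(exps)
    return lcm, gcd

print(lcm_gcd_by_exponents([24, 30]))  # (120, 6)
```

The `Counter` conveniently returns exponent 0 for primes missing from a factorization, which is exactly the convention used in (3).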

Using (4) and (5) to prove (1) is trivial. Let’s focus on the proof of (2).

First, let’s compute the GCD on the right-hand side of (2) by using (5), that is:

$GCD(\frac{1}{a_{1}}\prod_{i=1}^{N}a_{i}, …, \frac{1}{a_{N}}\prod_{i=1}^{N}a_{i})$ (6)

In other words, we’re evaluating the GCD of $N$ numbers, each of them being the product of $N-1$ of the $a_{i}$. The $m$-th of these numbers can be factorized as follows:

$\frac{1}{a_{m}}\prod_{i=1}^{N}a_{i} = \prod_{j=1}^{K}f_{j}^{\sum_{i=1}^{N}e_{ij}-e_{mj}}$ (7)

Since computing the GCD means taking, for each factor, the minimum exponent, and the exponent in (7) is smallest when $e_{mj}$ is largest, it follows that the exponent of the $j$-th factor $f_{j}$ in the GCD factorization is $\sum_{i=1}^{N}e_{ij}-\max_{i=1,…,N}(e_{ij})$. Hence:

$GCD(\frac{1}{a_{1}}\prod_{i=1}^{N}a_{i}, …, \frac{1}{a_{N}}\prod_{i=1}^{N}a_{i})=\prod_{j=1}^{K}f_{j}^{\sum_{i=1}^{N}e_{ij}-\max_{i=1,…,N}(e_{ij})}$ (8)

Finally, applying the properties of powers to the left-hand side of (2), and substituting (4) and (8) into the right-hand side, we get:

$\prod_{j=1}^{K}f_{j}^{\sum_{i=1}^{N}e_{ij}}=\prod_{j=1}^{K}f_{j}^{\max_{i=1,…,N}(e_{ij})} \cdot \prod_{j=1}^{K}f_{j}^{\sum_{i=1}^{N}e_{ij}-\max_{i=1,…,N}(e_{ij})}$ (9)

which is obviously true.

A powerful symmetry formula for calculating definite integrals

Let $f(x)$ be an even function, integrable on $\left [ 0,\alpha \right ]$, and let $g(x)$ be another function such that $g(-x)=\frac{1}{g(x)}$, with $g(x)\neq0$ and $g(x)\neq-1$ on $\left [ -\alpha,\alpha \right ]$.

It can be proved that:

$$\int_{-\alpha }^{\alpha }\frac{f(x)}{1+g(x)}dx=\int_{0}^{\alpha}f(x)dx$$ (1)

It’s an interesting formula in the sense that it allows us to simplify and speed up the calculation of some integrals.

Let’s see some examples.

1. Let $f(x)=x^2$ and $g(x)=e^x$. In this case (1) becomes $$\int_{-1}^{1}\frac{x^2}{1+e^x}dx=\int_{0}^{1}x^2dx=\left [ \frac{1}{3}x^3 \right ]_{0}^{1}=\frac{1}{3}$$

Fig.1 – Example 1: red area is equivalent to blue area (both are $1/3$)

2. Let now $f(x)=x^2+\cos x$ and $g(x)=e^{x\cos x}$. We have$$\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{x^2+\cos x}{1+e^{x\cos x}}dx=\int_{0}^{\pi/2}(x^2+\cos x)dx=\left [ \frac{1}{3}x^3 +\sin x\right ]_{0}^{\pi/2}=\frac{\pi^3}{24}+1$$

Fig. 2 – Example 2: red area is equivalent to blue area (both are $\pi^3/24 + 1$)

3. Finally, let $f(x)=x^2$ and $g(x)=(1+x^2)^x$. Formula (1) becomes$$\int_{-1}^{1}\frac{x^2}{1+(1+x^2)^{x}}dx=\int_{0}^{1}x^2dx=\left [ \frac{1}{3}x^3\right ]_{0}^{1}=\frac{1}{3}$$

Fig. 3 – Example 3: red area is equivalent to blue area (both are $1/3$)
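Before the proof, the identity can also be verified numerically; below is a sketch (the helper `simpson` is my own) checking example 1 with a composite Simpson rule:

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: x ** 2
g = lambda x: math.exp(x)

left = simpson(lambda x: f(x) / (1 + g(x)), -1, 1)   # left side of (1)
right = simpson(f, 0, 1)                             # right side of (1)
print(abs(left - right) < 1e-9)  # True
```

Both values come out equal to $1/3$ up to the quadrature error, as formula (1) predicts.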


The proof is a straightforward calculation.

$$\int_{-\alpha }^{\alpha }\frac{f(x)}{1+g(x)}dx=\int_{-\alpha }^{0}\frac{f(x)}{1+g(x)}dx+\int_{0}^{\alpha }\frac{f(x)}{1+g(x)}dx=$$

$$=\int_{-\alpha }^{0}\frac{f(x)\frac{1}{g(x)}}{1+\frac{1}{g(x)}}dx+\int_{0}^{\alpha}\frac{f(x)}{1+g(x)}dx=\int_{-\alpha }^{0}\frac{f(x)(\frac{1}{g(x)}+1-1)}{1+\frac{1}{g(x)}}dx+\int_{0}^{\alpha}\frac{f(x)}{1+g(x)}dx=$$

$$=\int_{-\alpha }^{0}\left [f(x)-\frac{f(x)}{1+\frac{1}{g(x)}} \right ]dx+\int_{0}^{\alpha}\frac{f(x)}{1+g(x)}dx=\int_{-\alpha }^{0}f(x)dx-\int_{-\alpha }^{0}\frac{f(x)}{1+\frac{1}{g(x)}}dx+\int_{0}^{\alpha}\frac{f(x)}{1+g(x)}dx=$$

applying the change of variable $t=-x$ ($dt=-dx$) in the second integral, and using the fact that $f(x)$ is even in the first integral, we get

$$=\int_{0}^{\alpha}f(x)dx-\int_{0}^{\alpha}\frac{f(t)}{1+g(t)}dt+\int_{0}^{\alpha}\frac{f(x)}{1+g(x)}dx=\int_{0}^{\alpha}f(x)dx$$

(in the substituted integral we used $f(-t)=f(t)$ and $\frac{1}{g(-t)}=g(t)$), and the formula is proven.

Why can’t Reflexivity be deduced from Symmetry and Transitivity?

The short answer is: quantifiers matter, they’re very important.

The question was asked by students of mine who were starting to study equivalence relations. They wondered: why is reflexivity needed? Can’t it be deduced from Symmetry and Transitivity?

Let’s recall the definition of equivalence relation.

Given two sets $A$ and $B$, a relation is any subset of the cartesian product $A\times B$. A relation can also be defined between elements of a single set $A$: in this case the relation is a subset of the cartesian product $A\times A$ ($A^{2}$).

A relation $R$ in $A\times A$ is said to be an equivalence if it is Reflexive, Symmetric and Transitive.

Reflexivity (M): $\forall x\in A, (x,x)\in R$

Symmetry (S): $(x,y)\in R \Rightarrow (y,x)\in R$

Transitivity (T): $(x,y)\in R \wedge (y,z)\in R\Rightarrow (x,z)\in R$

The students’ reasoning went like this: by S, $(x,y)\in R\Rightarrow (y,x)\in R$; hence, through T, $(x,x)\in R$, which looks like reflexivity M.

Looks like… The key point is that neither S nor T guarantees that all the elements of $A$ are linked through $R$ to at least one element of $A$: the universal quantifier “for all” ($\forall$) is missing. That universal quantifier is instead included in reflexivity M.

In other words, reflexivity M – precisely thanks to the universal quantifier “for all” ($\forall$) – is the only property among the three which guarantees that no element of $A$ is left outside the relation $R$; something S and T cannot do.

It’s useful to recall that an equivalence relation on a set $A$ naturally induces certain subsets – the so-called equivalence classes – which form a partition of $A$. By the definition of partition, no element of $A$ can be excluded: every element of $A$ must belong to one of the partition’s subsets.

Now, an example. The relation defined as follows:

$R=\left \{(m,n)\in \mathbb{Z}^{2} : mn>0\right \}$

is obviously symmetric and transitive; moreover, almost all the elements of $\mathbb{Z}$ are linked to themselves through $R$. However, the relation is not reflexive, since $0\in\mathbb{Z}$ is not linked to itself through $R$ (indeed, it’s not linked to any element of $\mathbb{Z}$ at all).
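This counterexample can be checked mechanically on a finite slice of $\mathbb{Z}$; here is a Python sketch (all names are my own):

```python
def related(m, n):
    # R = {(m, n) in Z^2 : m*n > 0}
    return m * n > 0

universe = range(-5, 6)  # a finite slice of Z, including 0

# Symmetry: (m, n) in R implies (n, m) in R
symmetric = all(related(n, m)
                for m in universe for n in universe if related(m, n))

# Transitivity: (m, n) in R and (n, p) in R implies (m, p) in R
transitive = all(related(m, p)
                 for m in universe for n in universe for p in universe
                 if related(m, n) and related(n, p))

# Reflexivity fails: 0 is not related to itself
reflexive = all(related(x, x) for x in universe)

print(symmetric, transitive, reflexive)  # True True False
```

Symmetry and transitivity hold vacuously wherever pairs exist, yet 0 never appears in any pair, so reflexivity fails.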

Should you have any comments, doubts or questions, feel free to leave them in the comments section. Thank you. See you soon.


In this brief article, we’ll see how to find the equivalent resistance of the following net:

Let’s call $R_{EQ}$ the equivalent resistance we’re asked to evaluate.

If we cut the net as indicated by the dotted arrow in the picture below, we can observe that the equivalent resistance of the part of the net on the right of the dotted line is again $R_{EQ}$.

So, we can write:


$R_{EQ}=aR+\frac{bR\cdot R_{EQ}}{bR+{R_{EQ}}}$,

$(bR+R_{EQ})R_{EQ}=aR(bR+R_{EQ})+bRR_{EQ}$ (*)

Going on with some algebra and solving the quadratic equation, we have:

$R_{EQ}=\frac{aR}{2}(1+\sqrt{1+\frac{4b}{a}})$ (1)

Let’s now have a glance at a few special cases.

  1. $a=0$. In this case, all the $aR$ resistances are short circuits, that means that the overall resistance will be zero. Indeed, if we put $a=0$ in (*) we get $R_{EQ}=0$.
  2. $b=0$. In this case, all the $bR$ resistances will be 0, therefore all the resistors on the right of the first $bR$ will be shorted. Hence, the overall equivalent resistance will be only $aR$. Indeed, if we put $b=0$ in (1), we get $R_{EQ}=aR$.
  3. Finally, let’s consider the case $a=b=1$, i.e. all resistors equal to $R$. If we substitute in (1), we quickly get:

$R_{EQ}=\frac{1+\sqrt{5}}{2}R \cong 1.618R$ (2)

Many of you will have noticed the “golden number” ($\frac{1+\sqrt{5}}{2}$) in (2). It seems to be hidden everywhere…
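Formula (1) and its golden-ratio special case can be checked numerically; the sketch below (function names are mine) compares the closed form with a brute-force evaluation of a long truncated ladder:

```python
import math

def r_eq_closed(a, b, R=1.0):
    # Formula (1): R_EQ = (aR/2) * (1 + sqrt(1 + 4b/a)), valid for a != 0
    return (a * R / 2) * (1 + math.sqrt(1 + 4 * b / a))

def r_eq_ladder(a, b, R=1.0, stages=60):
    # Brute force: build a truncated ladder from the far end,
    # adding one series aR and one shunt bR per stage
    r = a * R + b * R  # last cell: series aR into shunt bR
    for _ in range(stages):
        r = a * R + (b * R * r) / (b * R + r)
    return r

print(round(r_eq_closed(1, 1), 3))  # 1.618
print(abs(r_eq_closed(1, 1) - r_eq_ladder(1, 1)) < 1e-9)  # True
```

The truncated ladder converges quickly to the fixed point of the recursion, which is exactly the closed-form value.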

Let’s see another way to get the same result as (2), which should give another reason why the “golden number” appears in the equivalent resistance expression.

Let’s build the equivalent resistance step by step. Recall we’re now considering the net made of all equal resistors ($R$).

In the first step, let’s calculate the equivalent resistance as if the net was made of the first 2 resistors only (one in series and one in parallel). It is:

$R_{EQ,1}=R+R=2R$

In the second step, let’s add one couple of resistors (one series and one parallel). We have:

$R_{EQ,2}=R+\frac{R\cdot R_{EQ,1}}{R+R_{EQ,1}}=R+\frac{2}{3}R=\frac{5}{3}R$

In the third step, let’s add one more couple of resistors (one series and one parallel). We have:

$R_{EQ,3}=R+\frac{R\cdot R_{EQ,2}}{R+R_{EQ,2}}=R+\frac{5}{8}R=\frac{13}{8}R$

Going on a few more steps, we’ll get the following sequence:

$\left \{R_{EQ,n}\right \}=\left \{ \frac{2}{1}R, \frac{5}{3}R, \frac{13}{8}R, \frac{34}{21}R, … \right \}$

This should remind us of the Fibonacci sequence:

$f_{n}=\left \{ 1,1,2,3,5,8,13,21,34,55,… \right \}$

In fact, we have

$\left \{R_{EQ,n}\right \}=\left \{ \frac{f_{2n+1}}{f_{2n}}R \right \}$ (3)

Now, since the sequence (3) is a subsequence of $\left \{ \frac{f_{n+1}}{f_{n}}R \right \}$ (i.e. the sequence of ratios of two adjacent Fibonacci terms, times $R$), whose limit is the golden number times $R$, we have:

$\lim_{n\rightarrow \infty }R_{EQ,n}=\lim_{n\rightarrow \infty }\frac{f_{2n+1}}{f_{2n}}R=\frac{1+\sqrt{5}}{2}R$
which is the same result as (2).
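The step-by-step sequence above can be reproduced exactly with rational arithmetic; a short Python sketch (function name is mine):

```python
from fractions import Fraction

def ladder_sequence(steps):
    # R_EQ,n in units of R: start from the first two resistors (R + R = 2R),
    # then add one series R and one parallel R per step:
    # r -> 1 + r/(1 + r)   (series R plus R in parallel with r)
    r = Fraction(2)
    seq = [r]
    for _ in range(steps - 1):
        r = 1 + r / (1 + r)
        seq.append(r)
    return seq

print([str(x) for x in ladder_sequence(4)])  # ['2', '5/3', '13/8', '34/21']
```

Using `Fraction` keeps the ratios exact, so the Fibonacci pattern $f_{2n+1}/f_{2n}$ is visible directly in the numerators and denominators.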

GCD and LCM: a very interesting relationship!

Given two nonzero natural numbers $n$ and $m$, we have: $\mathbf{n\cdot m = GCD(n,m)\cdot LCM(n,m)}$ (1)

It’s a very interesting and useful relationship: for example, several algorithms to calculate the GCD of two numbers exist, so using (1) an algorithm to calculate the LCM can be implemented:

$LCM(n,m) = \frac{n\cdot m}{GCD(n,m)}$ (2)

From (1) it follows also:

$GCD(n,m) = \frac{n\cdot m}{LCM(n,m)}$ (3)


1. Let’s consider the numbers 3 and 4, which are coprime. We have: $LCM(3,4)=12$; $GCD(3,4)=1$ so $3\cdot 4=LCM(3,4)\cdot GCD(3,4)=12\cdot 1$

2. Let’s consider now the numbers 12 and 90. We have: $LCM(12,90)=180$; $GCD(12,90)=6$ so $12\cdot 90=LCM(12,90)\cdot GCD(12,90)=180\cdot 6$

3. Finally let’s consider the numbers 24 and 30. Decomposing them into prime factors: $24=2^{3}\cdot 3$ and $30=2\cdot 3\cdot 5$. Recall that to evaluate the LCM we must multiply all the factors, each taken once with its maximum exponent: $LCM(24,30)=2^{MAX(3,1)}\cdot 3\cdot 5= 2^{3}\cdot 3\cdot 5= 120$. The GCD instead is calculated by multiplying only those factors shared by the two numbers, each taken once with its minimum exponent: $GCD(24,30)=2^{min(3,1)}\cdot 3=2^{1}\cdot 3=6 $. Hence: $LCM(24,30)\cdot GCD(24,30)=\left [ 2^{MAX(3,1)}\cdot 3\cdot 5\right ]\cdot \left [ 2^{min(3,1)}\cdot 3 \right ]= \left [ 2^{3}\cdot 3\cdot 5 \right ]\cdot \left [ 2^{1}\cdot 3 \right ]=$

$=(2^{3}\cdot 3)\cdot (2\cdot 3\cdot 5)=24\cdot 30=720$.


I’ll provide two different proofs.

Proof 1

Let n and m be two natural numbers. Their prime factorizations can be conveniently written as follows:

$n=c_{n_1}^{e_{n_1}}\cdot c_{n_2}^{e_{n_2}}\cdot … \cdot c_{n_k}^{e_{n_k}}\cdot u_{n_1}^{e’_{n_1}}\cdot u_{n_2}^{e’_{n_2}}\cdot … \cdot u_{n_i}^{e’_{n_i}}=\prod_{\alpha =1}^{k}c_{n_\alpha }^{e_{n_{\alpha }}}\cdot \prod_{\beta =1}^{i}u_{n_\beta }^{e’_{n_{\beta }}}$ (4)

$m=c_{m_1}^{e_{m_1}}\cdot c_{m_2}^{e_{m_2}}\cdot … \cdot c_{m_k}^{e_{m_k}} \cdot u_{m_1}^{e’_{m_1}}\cdot u_{m_2}^{e’_{m_2}}\cdot … \cdot u_{m_j}^{e’_{m_j}} =\prod_{\alpha =1}^{k}c_{m_\alpha }^{e_{m_{\alpha }}}\cdot \prod_{\gamma =1}^{j}u_{m_\gamma }^{e’_{m_{\gamma }}} $ (5)

in which the c factors (Common) are those appearing in both the n and m factorizations (so there are the same k of them in both, in general with different exponents), whereas the u factors (Uncommon) are unique to each number.

Proceeding as in the example 3, we have:

$GCD(n,m)= \prod_{\alpha =1}^{k}c_{n_\alpha }^{min(e_{n_{\alpha }}, e_{m_{\alpha }})}$ (6)

$LCM(n,m)= \prod_{\alpha =1}^{k}c_{n_\alpha }^{MAX(e_{n_{\alpha }}, e_{m_{\alpha }})} \cdot \prod_{\beta =1}^{i}u_{n_\beta }^{e’_{n_{\beta }}} \cdot \prod_{\gamma =1}^{j}u_{m_\gamma }^{e’_{m_{\gamma }}} $ (7)

Multiplying equations (6) and (7) we get:

$LCM(n,m)\cdot GCD(n,m)= \prod_{\alpha =1}^{k}c_{n_\alpha }^{ min(e_{n_{\alpha }}, e_{m_{\alpha }})+MAX(e_{n_{\alpha }}, e_{m_{\alpha }})} \cdot \prod_{\beta =1}^{i}u_{n_\beta }^{e’_{n_{\beta }}} \cdot \prod_{\gamma =1}^{j}u_{m_\gamma }^{e’_{m_{\gamma }}} $ (8)

Moreover, since we’re dealing with only two numbers,

$ min(e_{n_{\alpha }}, e_{m_{\alpha }})+MAX(e_{n_{\alpha }}, e_{m_{\alpha }})= e_{n_{\alpha }} + e_{m_{\alpha }}, \forall \alpha \in \left \{1,…,k \right\} $ (9)

From (8) and (9) it follows that:

$LCM(n,m)\cdot GCD(n,m)= \prod_{\alpha =1}^{k}c_{n_\alpha }^{ e_{n_{\alpha }}+e_{m_{\alpha }}} \cdot \prod_{\beta =1}^{i}u_{n_\beta }^{e’_{n_{\beta }}} \cdot \prod_{\gamma =1}^{j}u_{m_\gamma }^{e’_{m_{\gamma }}} = $

$= \prod_{\alpha =1}^{k}c_{n_\alpha }^{e_{n_{\alpha }}}\cdot \prod_{\beta =1}^{i}u_{n_\beta }^{e’_{n_{\beta }}} \cdot \prod_{\alpha =1}^{k}c_{m_\alpha }^{e_{m_{\alpha }}}\cdot \prod_{\gamma =1}^{j}u_{m_\gamma }^{e’_{m_{\gamma }}} = n \cdot m$ (10),

and the statement is proven.

Proof 2

Let again n and m be two natural numbers, D be GCD(n,m) and G be LCM(n,m).

We can rewrite (1) as $n\cdot m=G\cdot D$.

Proving (1) is equivalent to proving that:

(H) $D=GCD(n,m)\Rightarrow$ (T) $G=\frac{nm}{D}=LCM(n,m)$ (11)

Proving that G – as defined by the right-hand side of (11) – is LCM(n,m) amounts to proving the two following statements:

(a) G is a multiple of both n and m

(b) there is no common multiple of n and m which is less than G

To prove (a), since D is a divisor of both n and m, we have:

$\exists k\in \mathbb{N}_0:n=kD$ (12)

$\exists h\in \mathbb{N}_0:m=hD $ (13)

from (12) we get $G=\frac{n\cdot m}{D}=\frac{kDm}{D}=km\Rightarrow\textbf{G is a multiple of m}$;

similarly, from (13) we get $G=\frac{n\cdot m}{D}=\frac{nhD}{D}=hn\Rightarrow\textbf{G is a multiple of n}$.

To prove (b), let’s suppose (reductio ad absurdum) that there exists a common multiple $G'$ of n and m with $G'<G$: $G’=h’n=k’m$ (so that $k'<k$ and $h'<h$). Defining $D’=\frac{nm}{G’}$, we get:

$\frac{n}{D’}=\frac{G’}{m}=k’\Rightarrow$ D’ divides n

$\frac{m}{D’}=\frac{G’}{n}=h’\Rightarrow$ D’ divides m.

So D’ is a common divisor of n and m.

Furthermore, the hypothesis G'<G implies $D’=\frac{nm}{G’}>\frac{nm}{G}=D\Rightarrow D’>D \Rightarrow $ D is NOT the GCD, contradicting the initial hypothesis (H).

The contradiction we obtained proves the thesis: there is no common multiple of n and m less than G, so (b) is proved as well.

So G is LCM(n,m).