
unary vs binary | is n++ faster than n = n + 1


As a programmer you’ve certainly run into unary and binary operations (unless you’re coding in Python, which has no ++ operator).

But which one is faster? Let’s take a look.

Typing Speed

This section might not be important for performance, but for speed of coding.

Typing n++ is a little faster than typing n = n + 1. But you certainly didn’t come here for the typing speed.

Compiled vs Interpreted

First we need to differentiate between compiled code and interpreted scripts.

Compiled

Because of compiler optimizations, the unary and binary operations compile down to identical assembly code:

    int binary(int i){
    	i = i + 1;
    	return i;
    }
     
    int unary(int i){
    	i++;
    	return i;
    }
    _binary:                                ## @binary
            .cfi_startproc
    ## BB#0:
            pushq   %rbp
    Ltmp0:
            .cfi_def_cfa_offset 16
    Ltmp1:
            .cfi_offset %rbp, -16 
            movq    %rsp, %rbp
    Ltmp2:
            .cfi_def_cfa_register %rbp
                                            ## kill: EDI<def> EDI<kill> RDI<def>
            leal    1(%rdi), %eax
            popq    %rbp
            retq
            .cfi_endproc
    _unary:                                 ## @unary
            .cfi_startproc
    ## BB#0:
            pushq   %rbp
    Ltmp3:
            .cfi_def_cfa_offset 16
    Ltmp4:
            .cfi_offset %rbp, -16 
            movq    %rsp, %rbp
    Ltmp5:
            .cfi_def_cfa_register %rbp
                                            ## kill: EDI<def> EDI<kill> RDI<def>
            leal    1(%rdi), %eax
            popq    %rbp
            retq
            .cfi_endproc

The generated code is identical, which means the performance will be equal.

The reason for this is the aforementioned compiler optimization. The compiler knows the fastest way the instruction set offers to perform the operation (here a single leal rather than, say, a separate move and add) and picks it for both forms.

Interpreted

Interpreted languages such as Python, Perl, and Bash handle this differently. (Java is a borderline case: its bytecode is JIT-compiled, so it tends to behave like the compiled case.)

In interpreted languages it’s always faster to use built-in operators and library functionality. If you write a binary n = n + 1 in such a language, the interpreter typically performs at least one additional operation per step, which increases the overall runtime.

With interpreted languages that use a JIT, you won’t reach peak efficiency until after several executions, once the hot code has been compiled.

The unary version in Perl, for example, is faster than the binary one.

unary:

    $ time perl -le '$n=0; foreach (1..100000000) { $n++ }'
    
    real	0m2.389s
    user	0m2.371s
    sys	0m0.015s

binary:

    $ time perl -le '$n=0; foreach (1..100000000) { $n=$n+1 }'
    
    real	0m4.469s
    user	0m4.442s
    sys	0m0.020s

Compiled unary vs binary operation benchmark

If we compare n++ with n = n + 7, for example, we get something a little different:

        addl      $7, %edi                      ## binary: i = i + 7
        movl      %edi, %eax

        incl      %edi                          ## unary: i++
        movl      %edi, %eax

The instructions differ, so the performance might too. So which one is faster?

Let’s take this code, which I stole from here and run it.

    #include <stdio.h>
    #include <sys/time.h>
    #include <stdint.h>
     
    int binary(int i){
    	i = i + 7;
    	return i;
    }
     
    int unary(int i){
    	i++;
    	return i;
    }
     
    int a;
    volatile uint64_t i, j;
     
    int main(){
    	struct timeval start, end;	
     
    	gettimeofday(&start, NULL);
    	for(i = 0; i<100; i++){
    		for (j=0; j<1000000; j++){
    			a = binary(a);
    		}
    	}
    	gettimeofday(&end, NULL);
     
    	fprintf(stdout, "Binary time in seconds:  %lf\n", 
    			(end.tv_sec - start.tv_sec) + 
    			(end.tv_usec - start.tv_usec)/1000000.0);
     
    	gettimeofday(&start, NULL);
    	for(i = 0; i<100; i++){
    		for (j=0; j<1000000; j++){
    			a = unary(a);
    		}
    	}
    	gettimeofday(&end, NULL);
     
    	fprintf(stdout, "Unary time in seconds:  %lf\n", 
    			(end.tv_sec - start.tv_sec) + 
    			(end.tv_usec - start.tv_usec)/1000000.0);
     
    	gettimeofday(&start, NULL);
    	for(i = 0; i<100; i++){
    		for (j=0; j<1000000; j++){
    		}
    	}
    	gettimeofday(&end, NULL);
     
    	fprintf(stdout, "Loop time in seconds:  %lf\n", 
    			(end.tv_sec - start.tv_sec) + 
    			(end.tv_usec - start.tv_usec)/1000000.0);
     
    	return !a;
    }

And the results of a gcc-compiled build:

Binary time in seconds:  0.232028
Unary time in seconds:  0.231783
Loop time in seconds:  0.231703

The loop time is just the empty loop without any operation, i.e. the baseline. And as you can see, the binary version takes slightly longer than the unary one.


Conclusion unary vs binary

It seems that while it usually makes no difference in compiled languages, it does make a difference in interpreted ones. I suggest generally using the unary operators, since they are a) faster to type and b) most of the time at least as fast at runtime.

However, there is a debate about whether n++ is less readable than n = n + 1. Sure, the unary operator is more implicit, but at least since C++ was named, everybody gets the “pun” and knows what n++ does.

If you liked this article, consider subscribing to my newsletter or checking out the support section. Have a nice day 🙂

