
Big O notation

Big O notation (or "the Big O") describes how the cost of an operation grows as the size of its input grows.

That cost (the complexity) can be measured in time, in memory, in CPU resources, and so on.

There are a few common notations to describe this:

  • O(n) -> the complexity grows linearly with the size of the input.

  • O(n^2) -> grows with the square of its input.

  • O(n^3) -> grows with the cube of its input.

  • O(n^4) -> and so on.
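To make the difference concrete, here is a small sketch (function names are illustrative, not from any library) that counts the basic operations performed by a linear and a quadratic routine:

```python
def linear_sum(items):
    """O(n): one addition per element."""
    total = 0
    for x in items:
        total += x
    return total

def count_pairs(items):
    """O(n^2): one step per ordered pair of elements."""
    steps = 0
    for a in items:
        for b in items:
            steps += 1
    return steps

# Doubling the input doubles the work for O(n),
# but quadruples it for O(n^2):
print(count_pairs(range(10)))  # 100
print(count_pairs(range(20)))  # 400
```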

In the notations above, the complexity grows at least linearly, at O(n), and then faster and faster. But what if the complexity grows more slowly than that?

This is where logarithm notations can help describe the complexity.

But first, what is logarithm or log?

A logarithm is the exponent to which a base must be raised to produce a given number, for example:

b^p = n
2^3 = 2x2x2
2^3 = 8

In this case, p is the logarithm: log_b(n) = p

Another example:

log10(10,000) = x
10^x = 10,000
10^4 = 10,000
log10(10,000) = 4
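The same worked example can be checked numerically with Python's standard math module:

```python
import math

# log base 10 of 10,000 is the exponent x such that 10**x == 10,000
x = math.log10(10_000)
print(round(x, 6))  # 4.0
assert 10 ** 4 == 10_000
```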

Now that we know that the log is just the exponent (p) to which a base is raised, we can say that:

  • O(log(n)) -> grows at a logarithmic rate based on its input.

Complexity described as O(log(n)) is what usually qualifies an algorithm as “efficient”.

For example, take this binary search implementation:

def binary_search(sorted_list, value):
    low = 0
    high = len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2  # probe the middle of the remaining range
        if sorted_list[mid] == value:
            return mid
        elif value < sorted_list[mid]:
            high = mid - 1       # discard the upper half
        else:
            low = mid + 1        # discard the lower half
    return -1                    # value not present

On each iteration of this loop the search space is halved, so the number of iterations grows as the base-2 logarithm of the input size: the complexity is O(log2(n))

n = len(sorted_list) = 1024
2^x = 1024
2^10 = 1024
log2(1024) = 10

Which means that we only need about 10 iterations to find a value in a sorted list of 1,024 elements.
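As a sanity check, a variant of the search above (the counting helper is hypothetical, added for illustration) can report how many loop iterations it actually ran. With this implementation the worst case over 1,024 elements is ⌊log2(1024)⌋ + 1 = 11 probes: ten halvings plus the final one-element check.

```python
def binary_search_counting(sorted_list, value):
    """Binary search that also reports how many loop iterations it ran."""
    low, high = 0, len(sorted_list) - 1
    iterations = 0
    while low <= high:
        iterations += 1
        mid = (low + high) // 2
        if sorted_list[mid] == value:
            return mid, iterations
        elif value < sorted_list[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return -1, iterations

data = list(range(1024))
worst = max(binary_search_counting(data, v)[1] for v in data)
print(worst)  # 11
```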

Also,

  • O(log(log(n))) -> grows at a double-logarithmic rate, i.e. the complexity increases even more slowly.

  • O(log(log(log(n)))) -> and so on, mirroring the O(n^x) family in the opposite direction.
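To get a feel for how slowly these grow, here is a quick check (assuming base-2 logarithms): even for a million elements the values stay tiny.

```python
import math

n = 1_000_000
print(round(math.log2(n), 1))             # 19.9
print(round(math.log2(math.log2(n)), 1))  # 4.3
```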

