
Below you can skim through featured articles I wrote over the years.

To see all 142 articles, head over to the archive.

Elixir, node.js, crypto, testing, tutorials, thoughts and more.

Get distinct field names of sub documents in MongoDB

Written on 2020-07-06

Let's say you have these documents in your collection items:

db.items.find()

{ "_id" : ObjectId("5f0345275663006139066197"), "subDocument" : { "field1" : 42 } }
{ "_id" : ObjectId("5f03452c5663006139066198"), "subDocument" : { "field3" : 6 } }
{ "_id" : ObjectId("5f0345275663006139066199"), "subDocument" : { "field1" : 6 } }

In other words, the fields of the sub-document subDocument are not the same across documents. They could be user-defined, or they could simply vary because of the nature of the domain you're working in.


So, how would you get the distinct field names of those sub-documents?

As a result, I would like to have an array containing the different field names.

db.items.aggregate({
  $project: {
    subDocument: {
      $objectToArray: "$subDocument"
    }
  }
}, {
  $unwind: '$subDocument'
}, {
  $project: {
    _id: '$subDocument.k'
  }
},
{
  $group: {
    _id: '$_id'
  }
})

{ "_id" : "field1" }
{ "_id" : "field3" }

Now you can just map each document and extract the _id field to have the distinct field names of all sub-documents.

Explanation

$project with $objectToArray

$objectToArray comes in handy in this case to destructure the object into [key, value] pairs in the following format:

db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }})
{ "_id" : ObjectId("5f0345275663006139066197"), "subDocument" : [ { "k" : "field1", "v" : 42 } ] }
{ "_id" : ObjectId("5f03452c5663006139066198"), "subDocument" : [ { "k" : "field3", "v" : 6 } ] }
{ "_id" : ObjectId("5f0346c85663006139066199"), "subDocument" : [ { "k" : "field1", "v" : 6 } ] }

$unwind the subDocument array

We want to have objects to get the fields, so you unwind (kind of "unzip") the array into distinct documents.

db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }}, {$unwind: '$subDocument'})
{ "_id" : ObjectId("5f0345275663006139066197"), "subDocument" : { "k" : "field1", "v" : 42 } }
{ "_id" : ObjectId("5f03452c5663006139066198"), "subDocument" : { "k" : "field3", "v" : 6 } }
{ "_id" : ObjectId("5f0346c85663006139066199"), "subDocument" : { "k" : "field1", "v" : 6 } }

$project just the k field

We are interested in each k (key) field of the subDocuments (which are now objects instead of arrays, after the $unwind stage):

db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }}, {$unwind: '$subDocument'}, {$project: {_id: '$subDocument.k'}})
{ "_id" : "field1" }
{ "_id" : "field3" }
{ "_id" : "field1" }

$group by _id to get rid of duplicate fields

$group can be used similarly to .distinct, but as an aggregation stage.
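For comparison, here is a sketch (using the sample documents from above) of why .distinct alone doesn't cut it: it needs a concrete field path, so it can list the distinct values of a field you already know about, but not the unknown field names themselves. Something like:

db.items.distinct('subDocument.field1')
[ 42, 6 ]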

In this case, we want to "group" by the name of the fields, so that we have unique values:

db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }}, {$unwind: '$subDocument'}, {$project: {_id: '$subDocument.k'}}, {$group: {_id: '$_id'}})
{ "_id" : "field1" }
{ "_id" : "field3" }

Now you can map the _id fields and extract the values, to finally have the unique field names:

> groupedFields = db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }}, {$unwind: '$subDocument'}, {$project: {_id: '$subDocument.k'}}, {$group: {_id: '$_id'}}).toArray()
[ { "_id" : "field3" }, { "_id" : "field1" } ]
> groupedFields.map(function (g) {return  g._id})
[ "field3", "field1" ]

Aggregations with sub-documents in MongoDB

Found a typo? Edit this page on GitHub

Written on   2020-07-06

442 words - 3 minutes πŸ•œ

I would like to extract statistics about sub-documents in a collection.

E.g. in the form of count, sum and average for each field

Let's say you have the following documents in the items collection:

db.items.find()
{ "_id" : ObjectId("5f034ce90b15686f5d78baed"), "subDocument" : { "field1" : 42, "field3" : 10 } }
{ "_id" : ObjectId("5f034ce90b15686f5d78baee"), "subDocument" : { "field2" : 14, "field3" : 6 } }
{ "_id" : ObjectId("5f034ce90b15686f5d78baef"), "subDocument" : { "field1" : 6, "field4" : 11 } }
{ "_id" : ObjectId("5f034cea0b15686f5d78baf0"), "subDocument" : { "field3" : 3, "field4" : 26 } }

How would you solve the use-case of aggregating each field of the subDocument dynamically?

Even without actually "knowing" which fields are contained in subDocument?


My approach is the following:

db.items.aggregate({
  $project: { subDocument: { $objectToArray: "$subDocument" } }
}, {
  $unwind: '$subDocument'
}, {
  $addFields: { 'type': {$type: '$subDocument.v'} }
}, {
  $match: { type: { $in: ['int', 'double', 'long', 'decimal'] } }
}, {
  $group: {
    _id: "$subDocument.k",
    count: {
      $sum: { $cond: [{ $ifNull: ['$subDocument.k', false] }, 1, 0] }
    },
    sum: {
      $sum: "$subDocument.v"
    },
    average: {
      $avg: "$subDocument.v"
    }
  }
},
{
  $sort: {
    _id: 1
  }
})

And the result looks like this:

{ "_id" : "field1", "count" : 2, "sum" : 48, "average" : 24 }
{ "_id" : "field2", "count" : 1, "sum" : 14, "average" : 14 }
{ "_id" : "field3", "count" : 3, "sum" : 19, "average" : 6.333333333333333 }
{ "_id" : "field4", "count" : 2, "sum" : 37, "average" : 18.5 }

Explanation

$project with $objectToArray

$objectToArray comes in handy in this case to destructure the object into [key, value] pairs.

$unwind the subDocument array

We want to have objects to get the fields, so you unwind (kind of "unzip") the array in distinct objects.

Add a type and filter by just numeric values

With these pipeline steps

  {
    $addFields: { 'type': {$type: '$subDocument.v'} }
  }, {
    $match: { type: { $in: ['int', 'double', 'long', 'decimal'] } }
  }

we add a "type" field to each document (each of which now represents a single field with its value), and keep just the "number" types.
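To make this more tangible, here is a sketch of the intermediate output after the $addFields stage, run on the sample documents from above (the legacy mongo shell stores plain numbers as doubles):

db.items.aggregate({$project: { subDocument: { $objectToArray: "$subDocument" } }}, {$unwind: '$subDocument'}, {$addFields: { 'type': {$type: '$subDocument.v'} }})
{ "_id" : ObjectId("5f034ce90b15686f5d78baed"), "subDocument" : { "k" : "field1", "v" : 42 }, "type" : "double" }
{ "_id" : ObjectId("5f034ce90b15686f5d78baed"), "subDocument" : { "k" : "field3", "v" : 10 }, "type" : "double" }
{ "_id" : ObjectId("5f034ce90b15686f5d78baee"), "subDocument" : { "k" : "field2", "v" : 14 }, "type" : "double" }
...

Since every value in the sample data is numeric, nothing gets filtered out by the following $match stage here; documents with string or boolean values, for example, would be dropped at that point.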

$group and extract stats

In the $group stage we are grouping by the field name, namely $subDocument.k.

For each document that falls into this bucket, we can count how many matches there are, compute the sum of the values, and finally get an average with $avg.
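As an aside: since the $match stage already guarantees that every document reaching $group has a numeric $subDocument.v, the count accumulator could arguably be simplified to a plain $sum: 1:

{
  $group: {
    _id: "$subDocument.k",
    count: { $sum: 1 },
    sum: { $sum: "$subDocument.v" },
    average: { $avg: "$subDocument.v" }
  }
}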

Finally, $sort the results

Sort by the grouped field name to have them alphabetically ordered.

Most valuable developer linux notebooks in 2020

Written on 2020-06-20

Recently I got into Linux again after a long hiatus.

I fell in love with Pop!_OS by System76.

Ubuntu 20.04 also looks gorgeous and made me want to go back to the Linux side.

So I started looking for a worthy laptop for the job, with excellent support for Linux, and these are my findings.

The notebooks are grouped by the following vendors: Dell, Lenovo and System76.

Prices are in Euro.

CPU Mark and Values are from cpubenchmark.net

I purposely excluded Purism and their Librem line: it's a niche (privacy-oriented) product, and it falls short on value for the buck and on potential to later re-sell.

Let me know if you have better alternatives by getting in touch @christian_fei

I have put together a publicly available spreadsheet that contains all this information.

/assets/images/posts/developer-linux-laptop/google-sheet.png

Dell

I went straight to the XPS Developer Edition, without looking much at the other models to be honest.

Dell XPS 13 Developer Edition

/assets/images/posts/developer-linux-laptop/dell-xps-13.jpeg

Price 1429 € (with touch-screen 1628 €)

Weight 1.23 kg

Thickness 1,56 cm

CPU Intel Core i7-10510U 10th gen (8 MB cache, up to 4,9 GHz, quad-core)

CPU Mark and Value 7294 / 17,86

RAM 16 GB, LPDDR3, 2.133 MHz

Disk SSD 512 GB PCIe NVMe M.2

Display InfinityEdge FHD (1.920 x 1.080), 13,3", non touch-screen

Networking Killer AX1650 (2x2) integrated with Intel Wi-Fi 6 + Bluetooth 5.1

Battery 4 cells, 52 Wh

Charging Port Thunderbolt 3 Type-C

Warranty 1 year

Supports fingerprint

Ubuntu certified

Link

Lenovo

ThinkPad X1 Carbon Gen 8

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-x1-carbon.png

Price 1929 €

Weight 1,09 kg

Thickness 1,49cm

CPU Intel Core i7-10510U (1,80 GHz - 4,90 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 7294 / 17,88

RAM 16 GB LPDDR3 2.133 MHz

Disk SSD 512 GB, M.2 2280, PCIe-NVMe, OPAL, TLC

Display 14", FHD (1.920 x 1.080), IPS, anti-reflection, 400 nit, non-touch

Networking wireless 6, bluetooth 5

Battery lithium 4 cells, 51 Wh

Charging AC adapter 65 W (3 pin) + USB Type C (Italy)

Integrated Mobile Broadband

Supports fingerprint

Warranty 3 years

Ubuntu certified

Link

ThinkPad X1 Yoga (Gen 4)

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-x1-yoga.png

Price 2496 €

Weight 1,35 kg

Thickness 1,55 cm

CPU Intel Core i7-8665U (1,9 GHz, up to 4,8 GHz with Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 6672 / 16,31

RAM 16 GB LPDDR3 2.133 MHz

Disk SSD 512 GB, M.2 2280, PCIe-NVMe, OPAL, TLC

Display 14" UHD (3.840 x 2.160), IPS, AR/AS, 500 nit, multi-touch

Networking Intel Wireless-AC 9560 2x2 AC, Bluetooth version 5.0 vPro

Battery Lithium 4 cells, 51 Wh

Charging AC adapter 65 W (3 pin) + USB Type C (Italy)

Supports fingerprint

Warranty 3 years

Not Ubuntu certified

Link

ThinkPad X390

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-x390.jpg

Price 1924 €

Weight 1,22 kg

Thickness 1,69 cm

CPU Intel Core i7-8665U (1,9 GHz, 4,8 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 6626 / 16,2

RAM 16 GB DDR4 2.400 MHz

Disk SSD 512 GB, M.2 2280, PCIe-NVMe, OPAL, TLC

Display 13,3" FHD (1.920 x 1.080), IPS, anti-reflection, 300 nit, multi-touch

Networking Intel Wi-Fi 6 AX200 2x2 AX, Bluetooth 5.0 vPro

Battery lithium 6 cells, 48 Wh

Charging AC adapter 65 W (3 pin) + USB Type C (Italy)

Supports fingerprint

Warranty 3 years

Ubuntu certified

Link

ThinkPad T490s

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-t490s.jpg

Price 1724 €

Weight 1,3 kg

Thickness 1,72 cm

CPU Intel Core i7-8665U (1,9 GHz, 4,8 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 6626 / 16,2

RAM 16 GB DDR4 2.400 MHz

Disk SSD 512 GB, M.2 2280, NVMe, Opal

Display 14" FHD (1.920 x 1.080), IPS, anti-reflection, 300 nit, multi-touch

Networking Intel Wireless-AC 9560 2x2 AC, Bluetooth 5.0 vPro

Battery lithium 3 cells, 57 Wh

Charging AC adapter 65 W (3 pin) + USB Type C (Italy)

Supports fingerprint

Warranty 3 years

Ubuntu certified

Link

ThinkPad L13

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-l13.png

Price 1244 €

Weight 1,38 kg

Thickness 1,76 cm

CPU Intel Core i7-10510U (1,80 GHz, 4,90 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 7294 / 17,83

RAM 16 GB DDR4 2.666 MHz

Disk SSD 512 GB, M.2 2280, PCIe-NVMe, OPAL, TLC

Display 13,3" FHD (1.920 x 1.080), IPS, anti-reflection, 300 nit, multi-touch

Networking Intel Wireless-AC 9560 2x2 AC, Bluetooth 5.0

Battery lithium 4 cells, 46 Wh

Charging AC adapter 65 W (3 pin) + USB Type C (Italy)

Supports fingerprint

Warranty 1 year

Ubuntu certified

Link

ThinkPad X1 Extreme Gen 2

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-x1-extreme.jpg

Price 2073 €

Weight 1,7 kg

Thickness 1,84 cm

CPU Intel Core i7-9750H (cache 12 MB, up to 4,5 GHz Turbo Boost)

CPU Mark and Value 11503 / 29,12

RAM 16 GB SoDIMM DDR4 2.666 MHz

Disk SSD 512 GB, M.2 2280, NVMe, Opal

Screen 15,6" FHD (1.920 x 1.080), IPS, anti-reflection, 500 nit, non-touch

Networking Intel Wi-Fi 6 AX200 2x2 AX, Bluetooth version 5.0

Battery lithium 4 cells, 80 Wh

Charging AC adapter 135 W (3 pin)

Supports fingerprint

Not Ubuntu certified

Link

ThinkPad P53s

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-p53s.png

Price 1820 €

Weight 1,78 kg

Thickness 1,91 cm

CPU Intel Core i7-8665U (1,9 GHz, 4,8 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 6626 / 16,2

RAM 24 GB DDR4 2.400 MHz (8 GB integrated + SoDIMM 16 GB)

Disk SSD 512 GB, M.2 2280, NVMe, Opal

Display 15,6" FHD (1.920 x 1.080), IPS, anti-reflection, 250 nit, multi-touch NVIDIA Quadro P520 2 GB GDDR5 64 bit

Networking Intel Wireless-AC 9560 2x2 AC, Bluetooth 5.0 vPro

Battery lithium 3 cells, 57 Wh

Charging AC adapter 65 W PCC (3 pin) - USB Type C (Italy)

Supports fingerprint

Warranty 3 years

Ubuntu certified

Link

ThinkPad L490

/assets/images/posts/developer-linux-laptop/lenovo-thinkpad-l490.png

Price 1386 €

Weight 1,69 kg

Thickness 2,25 cm

CPU Intel Core i7-8665U (1,9 GHz, 4,8 GHz Turbo Boost, 4 core, cache 8 MB)

CPU Mark and Value 6626 / 16,2

RAM 16 GB SoDIMM DDR4 2.666 MHz

Disk SSD 512 GB, M.2 2280, NVMe, Opal

Display 14"" HD (1.366 x 768), TN, 220 nit, anti-reflection, non touch

Networking Intel Wireless-AC 9260 2x2 AC, Bluetooth 5.0 vPro

Battery lithium 3 cells, 45 Wh

Charging AC adapter 65 W PCC (3 pin) - USB Type C (Italy)

Supports fingerprint

Warranty 1 year

Ubuntu certified

Link

System76

Lemur Pro

/assets/images/posts/developer-linux-laptop/system76-lemur-pro.jpg

Price 1377 €

Weight 0,99 kg

Thickness 1,55 cm

CPU Intel Core i7-10510U (1.8 up to 4.9 GHz - 8MB Cache - 4 Cores - 8 Threads)

CPU Mark and Value 7294 / 17,85

RAM 16 GB DDR4 at 2666 MHz (8GB+8GB)

Disk SSD 500GB Seq Read: 3,500 MB/s, Seq Write: 3,200 MB/s

Display 14.1" 1920x1080 FHD, Matte Finish, non-touch

Networking Upgrade to WiFi 6 + Bluetooth

Battery Li-Ion - 73 Wh

Charging 65 W, AC-in 100-240 V, 50-60 Hz and 65W+ USB Type-C Charging Compatible

Warranty 1 year

Not Ubuntu certified

Link

Darter Pro

/assets/images/posts/developer-linux-laptop/system76-darter-pro.png

Price 1288 €

Weight 1,6 kg

Thickness 2,44 cm

CPU Intel Core i7-10510U (1.8 up to 4.9 GHz - 8MB Cache - 4 Cores - 8 Threads)

CPU Mark and Value 7294 / 17,84

RAM 16 GB Dual Channel DDR4 at 2666 MHz (2x8GB)

Disk SSD 500GB Seq Read: 3,500 MB/s, Seq Write: 3,200 MB/s

Display 15.6" 1920x1080 Matte FHD IPS

Networking Upgrade to WiFi 6 + Bluetooth

Battery Li-Ion - 54.5 Wh

Charging 65 W, AC-in 100-240 V, 50-60 Hz

Warranty 1 year

Not Ubuntu certified

Link

Install Homeassistant on Raspberry Pi

Written on 2020-06-07

home-assistant.io is the latest great tool I discovered; it's simply a beautiful piece of technology.

Open source home automation that puts local control and privacy first

I recommend installing HassOS on a Raspberry Pi 4 with an SD card of at least 32GB.

Preparation

Download the system image from here based on your Raspberry Pi model.

Flash the hassos_rpi4-4.8.img.gz using Balena Etcher.

Use the Raspberry Pi 4 Model B 32bit image instead of the 64bit version

It's as simple as selecting the image (img.gz is fine, it gets decompressed on the fly), selecting the volume (32GB+ SD card, or alternatively a USB stick) and clicking "Flash".


Insert the SD card and connect the Pi to Ethernet.

Optionally, set up a CONFIG/network/my-network file with your WiFi network configuration in the hassos-boot partition of the SD card
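A minimal sketch of what that file can look like, assuming the NetworkManager keyfile format that HassOS reads (the uuid, ssid and psk values are placeholders to replace with your own):

[connection]
id=my-network
uuid=d55162b4-6152-4310-9312-8f4c54d86afa
type=802-11-wireless

[802-11-wireless]
mode=infrastructure
ssid=MY_SSID

[802-11-wireless-security]
auth-alg=open
key-mgmt=wpa-psk
psk=MY_WIFI_PASSWORD

[ipv4]
method=auto

[ipv6]
addr-gen-mode=stable-privacy
method=auto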

Installation

Boot up your Pi.

Wait until the installation finishes; it takes about 15-20 minutes.

Refresh the main page at homeassistant.local:8123 and continue the setup of your Homeassistant user.

Addons

Once the main user is set up, you can install add-ons and personalize your Hass dashboard.

These are a few with which I'm experimenting:

  • AirCast (stream audio to your Chromecast from an iOS device)
  • Duck DNS (Dynamic DNS service to access your Hass dashboard outside of your home)
  • File Editor (browser-based file editor)
  • Hey Ada! (Privacy focused Voice assistant)
  • Mosquitto Broker
  • Spotify Connect (Play Spotify music on your Hass device)
  • Terminal & SSH (remote login through SSH via browser)


To install them, click on "Supervisor" in the sidebar and go to the "Add-on Store".

Snapshot your system config

Once you have set up, installed and configured your favorite add-ons, it's best to back up your configuration.


Next steps

Configure the Dashboard to your liking.

Personally, I have to find out how to interact with MQTT, connect multiple Raspberry Pis and let them communicate with each other.

I also want to see the camera I have connected to my other Raspberry Pi Zero W and have a livestream of it on the Dashboard.

That's all! Have fun, and let me know what you came up with @christian_fei!

Notes on "Code BEAM V 2020"

Written on 2020-05-28

What: Code BEAM V official website

When: 2020/05/28 - 2020/05/29

Where: On the interwebz

Schedule: code-beam-v#schedule

Table of talks:

Day 1

Day 2

Opening Keynote - The Future of Programming

Talk 1 ~ 15:00 CEST

assets/images/posts/beam-v/222-participants.png

"It was crazy building soft near-realtime system 20 years ago" ~ Casarini

"Erlang is actually a Domain-Specific-Language written in C, tell your managers that and they'll buy that"

JVM for speed and parallelism.

BEAM for scalability and concurrency.

The predecessor of the BEAM was the JAM: Joe's Abstract Machine!

Looking at the future

Distribution: no worries about threads breaking or corrupting your memory; it introduces latency, but is pretty fast at the end of the day.

The future is distributed.

BEAM

The BEAM can be seen as an OS with a hypervisor on top of your OS.

assets/images/posts/beam-v/beam-os.png

The BEAM has always been "cloud-native", not relying on particular OS or distribution.

The BEAM helps to abstract away from distribution and from the OS, letting you focus on business logic.

assets/images/posts/beam-v/osi-1.png

You should be able to focus mostly just on the business side of programming, the actual business logic.

Erlang/Elixir helps you with that, e.g. by using a GenServer and implementing just the call or cast callbacks, without spending time on the underlying complexity.
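As a rough Elixir sketch (module and function names are made up for illustration), the business logic lives in a handful of callbacks while the BEAM handles the process plumbing:

defmodule Counter do
  use GenServer

  # Client API - the only thing callers need to know about
  def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial, name: __MODULE__)
  def increment, do: GenServer.cast(__MODULE__, :increment)
  def value, do: GenServer.call(__MODULE__, :value)

  # Server callbacks - plain business logic, no concurrency plumbing
  def init(initial), do: {:ok, initial}
  def handle_cast(:increment, count), do: {:noreply, count + 1}
  def handle_call(:value, _from, count), do: {:reply, count, count}
end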

Programming Model

Many different programming models

  • Functional Reactive
  • Lambdas
  • Event-driven
  • Pub/Sub
  • Actor Model

OTP will let you scale both vertically and horizontally, with a simple programming model.

assets/images/posts/beam-v/osi-2.png

Abstract the infrastructure

Hopefully in the future we won't be talking about Kubernetes, Lambdas or Docker Compose, and we'll abstract away the infrastructure.

Let's stop talking about tools, let's focus on valid, rock-solid abstractions.

This will help avoid making software developers also Network Engineers.

If you don't have the correct abstractions in place, you are going to have problems evolving with time.

If you don't have a tool that is abstracted away on layer 7, let it be.

assets/images/posts/beam-v/osi-other.png

"The future is concurrent, distributed and properly abstracted!"

Images kindly provided by Francesco Cesarini


Building adaptive systems

Talk 2 ~ 15:55 CEST - Chris Keathley

Request spikes happen. Overloads can be nightmares. Latencies are introduced.

This can happen for both internal and external services, DB, APIs, etc.

"All services have objectives". For example requests/second

A resilient service should be able to withstand a 10x traffic spike and continue to meet those objectives.

Queues and Overloads

Most systems boil down to queues.

They contain work that needs to be processed.

Overload happens when:

Arrival Rate (how quickly items show up) > Processing Time (how long it takes to process an item)

Little's Law

Elements in the queue = Avg Arrival Rate * Avg Processing Time
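For example (my own numbers, for illustration): with an average arrival rate of 100 requests/second and an average processing time of 50 ms, you should expect 100 * 0.05 = 5 elements in the queue at any given time.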

CPU pressure happens when too many BEAM processes are created, and not processed in time to empty the queue.

This slows down the queue and the scheduler of the BEAM, and things can fall apart.

Overload Mitigation

You need to get Arrival Rate and Processing Time under control.

It would be obvious to start dropping items from the queue.

The server that processes the queue items could drop requests, or queue items themselves and eventually evict items (a downstream solution).

You could also stop sending requests altogether to the downstream server (an upstream solution). This mitigates the load on the downstream.

Autoscaling

Autoscaling is not a silver bullet.

If your DB is under load and you auto-scale your server, you just made things worse.

You need to factor load shedding into the equation.

Circuit Breakers

If a server is under load, you can shut off the traffic to that server and let it "heal".

Circuit breakers are your last line of defense.

Circuit breakers should be dynamic and adapted to your domain. You cannot have a static circuit breaker in a dynamic domain.

Adaptive Concurrency

Read papers about Congestion Avoidance and Control.

Resilient to failures, a self-healing internet and systems.

Dynamically discover Adaptive limits by probing your system and seeing the actual limit of services.

Additive Increase Multiplicative Decrease

You back off much faster than you grow.

Tools and ideas: fuse and regulator (on github).

Backpressure

Backpressure can work great for internal services, but for e.g. a spike in users your system needs to dynamically adapt to the circumstances.

Adopting Erlang, Adapting Rebar

It's easy to pick up a book and read the theory, but you often get stuck on the more practical stuff.

Adopting Production Erlang:

with docker

  • efficient building
    • cache deps with rebar.lock
    • store local hex package cache
  • runtime concerns
    • busy wait (Erlang's schedulers spin in a tight loop instead of sleeping, burning your CPU)
    • schedulers
    • zombie processes

Most of the issues are fixed in OTP 23 and rebar3 3.14.

with kubernetes

Similar concerns as with docker.

You will get throttled if you reach certain CPU limits.

relx

The release-assembly tool used by rebar3.

Previously a standalone escript.

Slimmed down and sped up, with simplified configuration.

Dev, prod, and minimal mode.

Elixir meets Erlang in Blockchain development

Check out the Aeternity Blockchain!

Trust and Useability

Blockchains are trustless, distributed state-networks.

Less cool: speed is terrible, and usability is often quite bad.

That's because usability comes at the cost of giving trust to a certain authority.

Architecture

The FSM (finite state machine) handles channel protocol.

Watcher detects on-chain activity.

State Cache helps restore off-chain state.

assets/images/posts/beam-v/elixir-blockchain.png

State Channels

Created Off-chain. Speed, scalability. Still Trustless.

Off-chain as in "no-chain".

You pass tokens back and forth until the contract is concluded. It could take 10 minutes or 6 months.

Your smart contract is the "protocol" you're defining. Deploy the contract in the state channel.

coin-toss casino example

assets/images/posts/beam-v/coin-toss.png

More info on github.com/aeternity and aeternity.com

Elixir update

Current version Elixir 1.10, January 2020. 900 contributors. 10k+ packages on hex.pm. 1.3B+ downloads.

Erlang/OTP 21+ requirement

This is because Elixir fully integrates with Erlang's new logger; everything is shared.

New guards: is_map_key/2 and is_struct/1.
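A tiny hypothetical sketch of the new guards in action (module and function names are made up):

defmodule GuardDemo do
  # is_map_key/2 lets you check key presence directly in a guard
  def get(map, key) when is_map_key(map, key), do: {:ok, Map.get(map, key)}
  def get(_map, _key), do: :error

  # is_struct/1 distinguishes structs from plain maps
  # (clause order matters: every struct is also a map)
  def kind(term) when is_struct(term), do: :struct
  def kind(term) when is_map(term), do: :map
end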

Compilation tracers

Compilation in Elixir is the same as executing code.

This is because you can conditionally define functions and modules.

defmodule MyMod do
  if OtherMod.some_condition? do
    def some_fun do
      ...
    end
  end
end

This is where compilation tracers come into play: they receive a bunch of events (compile module, define function, etc.).

Useful for static code analysers.

Important foundation for the language.

Compilation environment

Application environment is read at compile time, not at run time.

assets/images/posts/beam-v/compile-env.png

You can now use Application.compile_env to read variables at compile time.
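A hypothetical sketch (app and key names are made up):

defmodule MyApp.Mailer do
  # Read once at compile time; Elixir will raise at boot if the
  # runtime value diverges from the one compiled in
  @service Application.compile_env(:my_app, :mailer_service, :default)

  def service, do: @service
end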

ExUnit Pattern Diffing

If you're interested in just a few fields of a struct, Elixir now gives you more readable traces.

assets/images/posts/beam-v/ex-unit-diffing.png

Future

1.11 in October 2020!

Calendar.strftime/3 for datetime formatting
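A sketch of what that will look like in iex:

iex> Calendar.strftime(~U[2020-10-06 13:30:00Z], "%A, %d %B %Y")
"Tuesday, 06 October 2020"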

New log levels: notice, critical, alert and emergency.

Warnings if using modules from non-dependencies.

Read Erlang docs from the Elixir Shell!

Phoenix LiveDashboard

Comes with every new Phoenix application (v1.5 and up).

Request logging, metrics etc.

Closing Keynote - The Tyranny of Structurelessness

How more meaningful code can make your project more resilient & maintainable

People want more and more from applications as time goes on!

-> Complexity grows at an exponential rate.

-> more flexible and easier to scale

Let's focus on domain and structure!

Good elixir

Functional core, imperative shell

Inner data manipulation happens in a clean, isolated environment; actions run on the outer layer (side-effects).

Another layer between the outer layer and the functional core: a semantic DSL / OO.

Decouple the imperative outer shell from the inner pure-logic functional core.

This leads to higher code reuse.

Testing

Prop + model testing for the functional core.

Huge gains and cleaner code.

Tradeoffs

  • Exchange granular control for structure
  • humans over machines
  • meaning over mechanics
  • safer!

Actor Abyss

Each step is very simple in an actor-based application.

Reasoning about dynamic organisms is difficult.

Complexity grows faster.

Composition

Composition is at the heart of modularity

Orthogonality is at the heart of composition

assets/images/posts/beam-v/composition.png

no reinventing the wheel

GenServers etc. are pretty low level! Add semantics to them!

A common example

def get(map, key, default \\ nil)

%{a: 1} |> Map.get(:b, 4)
#=> 4

def fallback(nil, default), do: default
def fallback(val, _), do: val

%{a: 1} |> Map.get(:b) |> fallback(4)
#=> 4

[] |> List.first() |> fallback(:empty)
#=> :empty

So instead of adding a third parameter to every function that implements the Enumerable protocol, you can abstract that semantic away!

good interfaces != good abstractions

Find a common interface with higher semantic density (focused on meaning, not mechanics)

Define front-end and back-end interfaces well (could be sync and async!)

Declarative, configurable data flow, super extensible:

defimpl Dataflow, for: Stream
defimpl Dataflow, for: Distributed
defimpl Dataflow, for: Broadway

Summary

Protocols are super useful for DDD

Add a semantic layer to your application code, based on your domain

Test your distributed system by looking at the properties

Prop-testing useful for structured abstractions

Opening Keynote - Problem-led Software Design

Boyd Multerer and Robert Virding

The Erlang Problem

How can we improve the programming of Telephone applications? Very complex to build and maintain.

  • Handling very large number of concurrent activities
  • perform certain actions within a certain time
  • system distributed over several computers
  • maintenance without stopping the system
  • fault tolerance for hardware failures and software errors

These are not problems just for telecom!

Internal development

  • many threads at the same time
  • understanding and evolving the problem domain
  • designing language and arch
  • testing the idea and how it would perform

Erlang was tested at Ericsson in the ACS/Dunder project: the first users of Erlang in a real product.

Doing lots of experiments and testing in the real world.

First principles

Lightweight concurrency, async communication. Process isolation (no effect on other processes), error handling (detect and handle errors), with soft real-time, non-blocking features.

Language/system should be simple, with a high level language to get real benefits.

Provide tools for building systems, not solutions. Basic operations needed for building communication protocols and error handling.


"Build for the future", so that your solutions don't bring to new problems, but the other way around.

Try to think of the problems you're not solving.

Think about the Actual problems, problems of purpose.

Problem Discovery! In Erlang they had the problem of the platform: to test the platform in real life, you need a real-life problem and application. A kind of chicken-and-egg problem.

Scenic

By the Xbox founder

OTP based control hierarchy, fast process based restarts

Jelly theory of development

Software is never really finished. If you touch it, it's gonna mold, change and adapt.

It's a continuous development and hacking target: you discover new problems emerging from usage.

Check out key10.com and Scenic!

How the BEAM will change your mind

By Laura M. Castro

Like when, in the functional programming world, people say: "you need to change your mind".

On an abstract level FP advocates sound fine, but getting more practical and approaching it is a different thing.

The imperative approach

"The God complex"

You see all data, you have control over all of it, you know it all (algorithm).

The object-oriented approach

"The Despicable master"

Small buckets of data; the buckets know about themselves.

The buckets don't know about each other. They can be seen as minions that do small tasks on their own data.

A master is still needed to orchestrate all together.

The functional approach

"The BEAM approach"

Each process has its own duties; it can go bad, but the world needs to go on. The processes can be supervised and monitored.

Eventually all characters work together and, as a result, a stable situation is reached.

Example: Project degree assignment process

  • each student is a process
  • each assignment is a process
  • administration process, knows if students fulfil criteria and marks
  • statistical information process

All supervised.

Example application: https://gitlab.com/lauramcastro/sorted

Could have gone with imperative approach.

Pros:

  • increased reliability
  • distributed responsibility
  • concurrent execution

Cons:

  • harder to control
  • harder to explain
  • harder to test

Takeaways

  • paradigms do affect the way you think
  • strength and weaknesses
  • respect the paradigm
  • not everything is a nail

Gleam: Lean BEAM typing machine

By Louis Pilfold

BEAM and Erlang were created to support multiple phone calls at once, be failsafe, and perform a great number of tasks at the same time.

The erlang is also a unit of concurrency: a measure of how many tasks can be handled per second.

Named after Agner Erlang.

Each thread / process handles a single unit, and can be sequential.

"It is difficult to write a web server to handle 2 million session. But it's easy to write 1 server, and replicate it 2 million times" - Joe

No shared state between processes; instead they communicate by sending messages to each other.

Distributed computing features are built into the BEAM: processes can send messages to each other on the same computer or across a cluster of computers.

If something goes wrong, the error is contained in the smallest possible sub-system: an erlang process.

In this sense, the BEAM is similar to Kubernetes, although with no shared state and more granularity when it comes to fail-over.

They operate at a different level of abstraction.

The BEAM, almost by chance, became super useful for webservers, databases, queues, other systems!

Languages on the BEAM: Erlang, Elixir, Gleam and others.

The BEAM gives us the tools to tolerate mistakes, e.g. bugs in production!

Gleam and ML languages

The compiler as a tool, a pair-programming buddy that gives you hints about what might go wrong.

Complementing the fault tolerance of the BEAM with the compiler and static analysis of Gleam.

Helps reduce and minimize the feedback loop before going to production with an error introduced by a programmer.

Gleam tries to be a type-safe version of OTP and Erlang!

Offensive programming

If your business logic expects something to work, don't be defensive on it.

Assert that every step worked as expected and return as soon as possible if there is an error.

Let it crash!

Differentiate on errors:

  • user input
  • "expected" errors (network failures, external services down)
  • unexpected errors (Oops)

gleam.run

Cotonic: browser coroutines with a universal MQTT message bus

By Marc Worrell

MQTT - Message Queueing Telemetry Transport

A communication protocol often used in IoT

Uses topic trees, with wild-cards; they can be as deep as you want.
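For example (hypothetical topics), + matches exactly one level and # matches a whole sub-tree:

home/kitchen/temperature   matches  home/+/temperature
home/bedroom/temperature   matches  home/+/temperature
home/kitchen/sensors/co2   matches  home/#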

Can have anything as a payload: Erlang terms, binaries, etc.

Clients connect to the server broker (over WebSockets), over a bridge.

Example at cotonic.org/examples/chat

assets/images/posts/beam-v/mqtt-cotonic.png

Security

We need ACL, and not everything on the same bus.

Privacy!

Through a client-id and routing-id (can be seen as public IP).

Matching routing-id replies only to public and response topic.

Every access and message is authenticated (+ACL) through an Auth Cookie in Zotonic/Cotonic.

Some payloads are very private: password, chats etc.

-> Encryption

Key server, handshake to secure trust.

It has a table with communication keys for each client.

The client requests a key-id.

Encryption/decryption through a key-id.

An update from the OTP team

blog.erlang.org/OTP-23-Highlights/

Closing Keynote: An update from the Erlang Ecosystem Foundation working groups

erlef.org

A non-profit organisation, with people from the community volunteering to grow the ecosystem.

400+ Members! 11 Working Groups!

Some working groups

  • Sponsorship
  • Fellowship
  • Infrastructure
  • Documentation (unifying way of documenting code in erlang + elixir and beam in general)
  • Language Interoperability (interop between langs e.g. elixir, gleam, etc.)
  • Education
  • Building and Packaging (rebar, hex etc)
  • Observability (Open Telemetry and Code tracing)
  • Security (secure coding and deployment hardening etc)

erlef.org/become-a-sponsor

Next Conferences

assets/images/posts/beam-v/next-conf.png
