      Mathieu Pasquet: slixmpp v1.10

      news.movim.eu / PlanetJabber • 26 March • 2 minutes

    This new version does not have many new features, but it has quite a few breaking changes, which should not impact many people, as well as one important security fix.

    Thanks to everyone who contributed with code, issues, suggestions, and reviews!

    Security

    After working on TLS stuff, I noticed that we still allowed unencrypted SCRAM to be negotiated, which is really not good. For packagers who only want this security fix, commit fd66aef38d48b6474654cbe87464d7d416d6a5f3 should apply cleanly on any slixmpp version.

    (Most servers in the wild have unencrypted connections disabled entirely, so this is only an issue for man-in-the-middle attacks.)

    Enhancements

    • slixmpp now supports XEP-0368 and makes it easy to choose between direct TLS and STARTTLS.

    Breaking Changes

    • The security issue mentioned above is a breaking change if you actively want to connect to servers without encryption. If that is a desired behavior, you can still set xmpp['feature_mechanisms'].unencrypted_scram = True on init.

    • Removal of the timeout_callback parameter everywhere it was present. Users are encouraged to await the coroutine or the future returned by the function, which will raise an IqTimeout exception when appropriate.

    • Removal of the custom Google plugins (both the google and gmail_notify plugins), which I am guessing have not worked in a very long time.

    • Removal of the Stream Compression (XEP-0138) plugin. It was not working at all, and use of compression is actively discouraged for security reasons.

    • Due to the new connection code, the configuration of the connection parameters has changed quite a bit:

      • The XMLStream class (from which ClientXMPP inherits) no longer has a use_ssl parameter. Instead it has enable_direct_tls, enable_starttls, and enable_plaintext attributes, which control whether to connect using STARTTLS or direct TLS. The plaintext attribute is for components, since we only implement the Jabber Component Protocol (XEP-0114).
      • Handling of custom addresses has changed a bit: they are now set by passing them to connect(), and kept until connect() is called without arguments again.
      • The DNS code will now fetch both xmpps-client and xmpp-client records (unless direct TLS is explicitly disabled) and prefer direct TLS if it has the same priority as STARTTLS.
      • The SRV records targeted by the queries can be customized using the tls_services and starttls_services attributes of ClientXMPP (but I have no idea why anyone would do this).
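
    The timeout_callback removal means you now await the result and handle the timeout as an exception. Here is a minimal asyncio sketch of that pattern; the IqTimeout class and slow_iq_reply coroutine are stand-ins invented for illustration, not slixmpp's actual API:

```python
import asyncio

class IqTimeout(Exception):
    """Stand-in for slixmpp's IqTimeout exception."""

async def slow_iq_reply():
    # Pretend the server never answers this IQ.
    await asyncio.sleep(10)
    return "result"

async def send_iq(timeout=0.1):
    # Instead of passing a timeout_callback, await the coroutine/future
    # and catch the timeout exception it raises.
    try:
        return await asyncio.wait_for(slow_iq_reply(), timeout)
    except asyncio.TimeoutError as exc:
        raise IqTimeout from exc

try:
    asyncio.run(send_iq())
except IqTimeout:
    print("IQ timed out")
```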
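
    The direct-TLS preference in the DNS code can be illustrated with a toy sort (this is not slixmpp's actual code; the record tuples and hostnames are made up):

```python
# Each candidate record: (priority, is_direct_tls, host, port).
# Lower priority wins; on a tie, direct TLS is preferred.
def order_candidates(records):
    return sorted(records, key=lambda r: (r[0], not r[1]))

records = [
    (5, False, "starttls.example.com", 5222),   # xmpp-client
    (5, True,  "directtls.example.com", 5223),  # xmpps-client
    (1, False, "primary.example.com", 5222),    # xmpp-client
]

for prio, direct, host, port in order_candidates(records):
    print(prio, "direct TLS" if direct else "STARTTLS", host)
# Priority 1 comes first; at priority 5, the direct TLS record wins the tie.
```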

    Fixes

    • Another issue encountered with the Rust JID: comparing a JID against strings that cannot be parsed, or against other objects, would raise an InvalidJID exception instead of returning False.
    • The ssl_cert event would only be invoked on STARTTLS.
    • One of the asyncio warnings on program exit (that a coroutine is still running).
    • Traceback with BaseXMPP.get.
    • A potential edge case in the disco (XEP-0030) plugin when using strings instead of JIDs.
    • A traceback in vcard-temp (XEP-0054) and Legacy Delayed Delivery (XEP-0091) when parsing datetimes.
    • A traceback when manipulating conditions in feature mechanisms.
    • A traceback in Ad-hoc commands (XEP-0050) during error handling.
    • Many tracebacks in OAuth over XMPP (XEP-0235) due to urllib API changes.

    Links

    You can find the new release on Codeberg, PyPI, or, in a short while, the distributions that package it.

    • blog.mathieui.net/en/slixmpp-1.10.html

      Kaidan: Kaidan 0.12.0: User Interface Polishing and Account Migration Fixes

      news.movim.eu / PlanetJabber • 20 March • 1 minute

    Kaidan 0.12.0 looks and behaves better than ever before! Chats can now quickly be pinned and moved. In addition, the list of group chat participants for mentioning them is placed above the cursor if enough space is available. With this release, OMEMO can be used right after migrating an account, and migrated contacts are correctly verified.

    Have a look at the changelog for more details.

    Changelog

    Features:

    • Use square selection to crop avatars (fazevedo)
    • Use background with rounded corners for chat list items (melvo)
    • Remove colored availability indicator from chat list item (melvo)
    • Display group chat participant picker above text cursor in large windows (melvo)
    • Do not allow to enter/send messages without visible characters (melvo)
    • Remove leading/trailing whitespace from exchanged messages (melvo)
    • Ignore received messages without displayable content if they cannot be otherwise processed (melvo)
    • Allow to show/hide buttons to pin/move chat list items (melvo)

    Bugfixes:

    • Fix style for Flatpak (melvo)
    • Fix displaying video thumbnails and opening files for Flatpak (melvo)
    • Fix message reaction details not opening a second time (melvo)
    • Fix opening contact addition view on receiving XMPP URIs (melvo)
    • Fix format of text following emojis (melvo)
    • Fix eliding last message text for chat list item (melvo)
    • Fix unit tests (mlaurent, fazevedo, melvo)
    • Fix storing downloaded files with unique names (melvo)
    • Fix overlay to change/open avatars shown before hovered in account/contact details (melvo)
    • Fix verification of moved contacts (fazevedo)
    • Fix setting up end-to-end encryption (OMEMO 2) after account migration (melvo)

    Notes:

    • Kaidan requires KWindowSystem and KDSingleApplication now (mlaurent)
    • Kaidan requires KDE Frameworks 6.11 now
    • Kaidan requires KQuickImageEditor 0.5 now
    • Kaidan requires QXmpp 1.10.3 now

    Download

    Or install Kaidan for your distribution:

    Packaging status

    • kaidan.im/2025/03/21/kaidan-0.12.0/

      Erlang Solutions: Meet the team: Lorena Mireles

      news.movim.eu / PlanetJabber • 20 March • 3 minutes

    Lorena Mireles is an influential force in the BEAM community, known for her work as an Elixir developer and as a dedicated member of the Code BEAM America programme committee. She’s been instrumental in fostering connections and shaping discussions that help drive the future of Elixir.

    In this interview, Lorena opens up about her journey with Elixir, her role on the committee, and what makes the BEAM community so unique.

    Meet the team: Lorena Mireles

    What first drew you to Elixir, and what keeps you hooked?

    The community was, without a doubt, the first reason I became interested in Elixir. I had no prior knowledge of Elixir the first time I attended a conference, but I felt very comfortable at the talks. The explanations were clear and interesting, which motivated me to investigate the programming language further.

    Also, everyone was very kind and willing to share their knowledge. Over time, I discovered the advantages of this programming language for designing powerful systems. I’m still amazed at how easy it is to create projects with complex technical requirements, all thanks to the way Elixir and BEAM were created, and all the material available to learn about them.

    How did you get involved with the Code BEAM America committee, and what’s that experience been like?

    I joined the committee at the invitation of the organisers, and I’m very grateful, as I’ve been a part of it for three consecutive editions, and I continue to learn and be surprised each time.

    My work focuses primarily on promoting women’s participation at the conference and supporting the diversity program, which has allowed me to meet great women and learn about their projects and experiences. Overall, it’s a great opportunity to get to know the speakers a little better and get involved in the BEAM community.

    I also learn about new topics, as seeing the talks they submit also motivates me to explore them.

    What were your standout moments from this year’s Code BEAM America?

    I’ll start with my favorite: reconnecting with the BEAM community. I admire so many people and their work, so Code BEAM America was a great experience to learn more about it. I also loved seeing the new speakers and first-time attendees. I chatted with some of them, and they loved the experience. It was great to get their feedback.

    The keynotes were also some of my favorites. Machine Learning and AI were discussed, which seemed very appropriate given the current relevance of these topics. There were also a couple of talks focused on social aspects, which are always necessary to foster continuous improvement in teams.

    What excites you most about the future of the BEAM community?

    All the projects that will likely be happening this year. At this year’s Code BEAM, I met new speakers and saw new attendees, which means the knowledge continues to expand and the community grows, and that also means new projects and more material about Elixir and BEAM in general.

    I’m excited to think about all the new things we’ll see and how we continue to encourage new people to participate because, without a doubt, Elixir is a programming language worth learning.

    Final thoughts

    Lorena’s experience with Elixir and her role in the BEAM community show just how powerful collaboration and innovation can be in shaping the ecosystem. Beyond that, her Women in BEAM survey and Women in Elixir webinar are amazing resources she’s put together to foster more inclusivity in the community.

    You can find her on social media channels below, so feel free to reach out and connect!

    The post Meet the team: Lorena Mireles appeared first on Erlang Solutions.

      Prosodical Thoughts: Prosody 13.0.0 released!

      news.movim.eu / PlanetJabber • 17 March • 7 minutes

    Welcome to a new major release of the Prosody XMPP server! While the 0.12 branch has served us well for a while now, this release brings a bunch of new features we’ve been busy polishing.

    If you’re unfamiliar with Prosody, it’s an open-source project that implements XMPP , an open standard protocol for online communication. Prosody is widely used to power everything from small self-hosted messaging servers to worldwide real-time applications such as Jitsi Meet. It’s part of a large ecosystem of compatible software that you can use for realtime online communication.

    Before we begin…

    The first thing anyone who has been following the project for a while will notice about this release is the version number.

    Long adherents of the cult of 0ver, we finally decided it was time to break away. As Shakespeare wrote, “That which we call a rose, by any other name would smell as sweet”, and the same is true of version numbers. Prosody has been stable and used in production deployments for many years; however, the ‘0.x’ version number occasionally misled people into believing otherwise. Apart from shifting the middle component leftwards, nothing has changed.

    If you’re really curious, you can read full details in our versioning and support policy .

    Our version numbers have also been in step with Debian’s for several versions now. Could this become a thing? Maybe!

    Overview of changes

    This release brings a wide range of improvements that make Prosody more secure, performant, and easier to manage than ever before. Let’s review the most significant changes that administrators and users can look forward to across a range of different topics.

    Security and authentication

    Security takes centre stage in this release with several notable improvements. Building on DNSSEC, the addition of full DANE support for server-to-server connections strengthens the trust between federating XMPP servers.

    We’ve enhanced our support for channel binding, which is now compatible with TLS 1.3, and we’ve added support for XEP-0440 which helps clients know which channel binding methods the server supports. Channel binding protects your connection from certain machine-in-the-middle attacks, even if the server’s TLS certificate is compromised.

    Account management

    Administrators now have more granular control over user accounts with the ability to disable and enable them as needed. This can be particularly useful for public servers, where disabling an account can act as a reversible alternative to deletion.

    In fact, we now have the ability to set a grace period for deleted accounts to allow restoring an account (within the grace period) in case of accidental deletion.

    Roles and permissions

    A new role and permissions framework provides more flexible access control. Prosody supplies several built-in roles:

    • prosody:operator - for operators of the whole Prosody instance. By default, accounts with this role have full access, including to operations that affect the whole server.
    • prosody:admin - the usual role for admins of a specific virtual host (or component). Accounts with this role have permission to manage user accounts and various other aspects of the domain.
    • prosody:member - this role is for “normal” user accounts, but specifically those that are trusted to some extent by the administrators. Typically, accounts created through an invitation or through manual provisioning by the admin have this role.
    • prosody:registered - this role is also for general user accounts, but is used by default for accounts which registered themselves, e.g. if the server has in-band registration enabled.
    • prosody:guest - finally, the “guest” role is used for temporary/anonymous accounts and is also the default for remote JIDs interacting with the server.

    For more details about how to use these roles, customize permissions, and more, read our new roles and permissions documentation . You will also find the link there for the development documentation, so module developers can make use of the new system.

    Shell commands

    Since the earliest releases, the prosodyctl command has been the admin’s primary way of managing and interacting with Prosody. In 0.12 we introduced the prosodyctl shell interface to send administrative commands to Prosody at runtime via a local connection. It has been a big success, and this release significantly extends its capabilities.

    • prosodyctl adduser/passwd/deluser commands now use the admin connection to create users, which improves compatibility with various storage and authentication plugins. It also ensures Prosody can instantly respond to changes, such as immediately disconnecting users when their account is deleted.
    • Pubsub management commands have been added, to create/configure/delete nodes and items on pubsub and PEP services without needing an XMPP client.
    • One of our own favourites as Prosody developers is the new prosodyctl shell watch log command, which lets you stream debug logs in real-time without needing to store them on the filesystem.
    • Similarly, there is now prosodyctl shell watch stanzas which lets you monitor stanzas to/from arbitrary JIDs, which is incredibly helpful for developers trying to diagnose various client issues.
    • Server-wide announcements can now be sent via the shell, optionally limiting the recipients by online status or role.
    • MUC has gained a few new commands for interacting with MUC rooms.

    Improved Multi-User Chat (MUC) Management

    The MUC system has received a significant overhaul focusing on security and administrative control. By default, room creation is now restricted to local users, providing better control over who can create persistent and public rooms.

    Server administrators get new shell commands to inspect room occupants and affiliations, making day-to-day operations more straightforward.

    One notable change is that component admins are no longer automatically owners. This can be reverted to the old behaviour with component_admins_as_room_owners = true in the config, but this has known incompatibilities with some clients. Instead, admins can use the shell or ad-hoc commands to gain ownership of rooms when it’s necessary.

    Better Network Performance

    Network connectivity sees substantial improvements with the implementation of RFC 8305’s “Happy Eyeballs” algorithm, which enhances IPv4/IPv6 dual-stack performance and increases the chance of a successful connection.
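
    The first step of Happy Eyeballs is to interleave IPv6 and IPv4 candidates so that one broken address family cannot stall the whole connection attempt. A simplified Python sketch of that interleaving step (Prosody itself is written in Lua; this only illustrates the algorithm):

```python
from itertools import zip_longest

def interleave(ipv6_addrs, ipv4_addrs):
    """Alternate between address families, IPv6 first (RFC 8305, section 4)."""
    out = []
    for v6, v4 in zip_longest(ipv6_addrs, ipv4_addrs):
        if v6 is not None:
            out.append(v6)
        if v4 is not None:
            out.append(v4)
    return out

print(interleave(["2001:db8::1", "2001:db8::2"], ["192.0.2.1"]))
# -> ['2001:db8::1', '192.0.2.1', '2001:db8::2']
```

    A real implementation then starts connection attempts against this list one by one, staggered by a short delay, and keeps the first one to succeed.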

    Support for TCP Fast Open and deferred accept capabilities (in the server_epoll backend) can potentially reduce connection latency.

    The server now also better handles SRV record selection by respecting the ‘weight’ parameter, leading to more efficient connection distribution.
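
    Within a single priority level, RFC 2782 says targets should be chosen at random with probability proportional to their weight. A small Python sketch of that selection (an illustration only, not Prosody's actual code; the hostnames are made up):

```python
import random

def pick_by_weight(targets, rng=random):
    """targets: list of (weight, host) tuples; returns one host, weighted."""
    total = sum(w for w, _ in targets)
    if total == 0:
        return rng.choice(targets)[1]
    threshold = rng.uniform(0, total)
    running = 0
    for weight, host in targets:
        running += weight
        if threshold <= running:
            return host
    return targets[-1][1]

targets = [(60, "big.example.com"), (20, "small-a.example.com"), (20, "small-b.example.com")]
counts = {host: 0 for _, host in targets}
rng = random.Random(42)  # seeded for reproducibility
for _ in range(1000):
    counts[pick_by_weight(targets, rng)] += 1
print(counts)  # big.example.com should be picked roughly 60% of the time
```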

    Storage and Performance Improvements

    Under the hood, Prosody now offers better query performance with its internal archive stores by generating indexes.

    SQLite users now have the option to use LuaSQLite3 instead of LuaDBI, potentially offering better performance and easier deployment.

    We’ve also added compatibility with SQLCipher , a fork of SQLite that adds support for encrypted databases.

    Configuration Improvements

    The configuration system has been modernized to support referencing and appending to previously set options, making complex configurations more manageable.

    While direct Lua API usage in the config file is now deprecated, it remains accessible through the new Lua.* namespace for those who need it.

    Also new in this release is the ability to reference credentials or other secrets in the configuration file, without storing them in the file itself. It is compatible with the credentials mechanisms supported by systemd , podman and more.

    Developer/API changes

    The development experience has always been an important part of our project - we set out to make an XMPP server that was very easy to extend and customize. Our developer API has improved with every release. We’ve even had first-time coders write Prosody plugins!

    There are too many improvements to list here, but some notable ones:

    • Storage access from modules has been simplified with a new ‘keyval+’ store type, which combines the old ‘keyval’ (default) and ‘map’ stores into a single interface. Before this change, many modules had to open the store twice to utilize the two APIs.
    • Any module can now replace custom permission handling with Prosody’s own permission framework via the simple module:may() API call.
    • Providing new commands for prosodyctl shell is now much easier for module developers.

    Backwards compatibility is of course generally preserved, although is_admin() has been deprecated in favour of module:may(). Modules that want to remain compatible with older versions can use mod_compat_roles to enable (limited) permission functionality.

    Important Notes for Upgrading

    A few breaking changes are worth noting:

    • Lua 5.1 support has been removed (this also breaks compatibility with LuaJIT, which is based primarily on Lua 5.1).
    • Some MUC default behaviors have changed regarding room creation and admin permissions (see above).

    Conclusion

    We’re very excited about this release, which represents a significant step forward for Prosody, and contains improvements for virtually every aspect of the server. From enhanced security to better performance and more flexible administration tools, there has never been a better time to deploy Prosody and take control of your realtime communications.

    As always, if you have any problems or questions with Prosody or the new release, drop by our community chat !

    • blog.prosody.im/prosody-13.0.0-released/

      Erlang Solutions: Elixir vs Haskell: What’s the Difference?

      news.movim.eu / PlanetJabber • 13 March • 11 minutes

    Elixir and Haskell are two very powerful, very popular programming languages. However, each has its strengths and weaknesses. Whilst they are similar in a few ways, it’s their differences that make them more suitable for certain tasks.

    Here’s an Elixir vs Haskell comparison.

    Elixir vs Haskell: a comparison

    Core philosophy and design goals

    Starting at a top-level view of both languages, the first difference we see is in their fundamental philosophies. Both are functional languages. However, their design choices reflect very different priorities.

    Elixir is designed for the real world. It runs on the Erlang VM (BEAM), which was built to handle massive concurrency, distributed systems, and applications that can’t afford downtime, like telecoms, messaging platforms, and web apps.

    Elixir prioritises:

    • Concurrency-first – It uses lightweight processes and message passing to make scalability easier.
    • Fault tolerance – It follows a “Let it crash” philosophy to ensure failures don’t take down the whole system.
    • Developer-friendly – Its Ruby-like syntax makes functional programming approachable and readable.

    Elixir is not designed for theoretical rigour; it is practical. It gives you the tools you need to build robust, scalable systems quickly, even if that means allowing some flexibility in functional purity.

    Haskell, on the other hand, is all about mathematical precision. It enforces a pure programming model. As a result, functions don’t have side effects, and data is immutable by default. This makes it incredibly powerful for provably correct, type-safe programs, but it also comes with a steeper learning curve.

    We would like to clarify that Elixir’s data is also immutable, but it does a great job of hiding that fact. You can “reassign” variables and ostensibly change values, but the data underneath remains unchanged. It’s just that Haskell doesn’t allow that at all.

    Haskell offers:

    • Pure functions – No surprises; given the same input, a function will always return the same output.
    • Static typing with strong guarantees – The type system (with Hindley-Milner inference, monads, and algebraic data types) helps catch errors at compile time instead of run time.
    • Lazy evaluation – Expressions aren’t evaluated until they’re needed, optimising performance but adding complexity.

    Haskell is a language for those who prioritise correctness, mathematical rigour, and abstraction over quick iterations and real-world flexibility. That does not mean it’s slow or inflexible. In fact, experienced Haskellers will use its strong type guarantees to iterate faster, relying on the compiler to catch any mistakes. However, it does contrast with Elixir’s gradual tightening approach, where interaction between processes is prioritised and initial development is quick and flexible, becoming more and more precise as the system evolves.

    Typing: dynamic vs static

    The next significant difference between Elixir and Haskell is how they handle types.

    Elixir is dynamically typed. It doesn’t require explicitly declared variable types; types are checked at run time instead. As a result, it’s fast to write and easy to prototype, letting you focus on functionality rather than defining types up front.

    Of course, there’s a cost attached to this flexibility. Since types are only checked at run time, type errors are also only detected then. Mistakes that could have been caught earlier surface only when the code is executed. In a large project, this can make debugging a nightmare.

    For example:

    def add(a, b), do: a + b

    IO.puts add(2, 3)       # Works fine
    IO.puts add(2, "three") # Raises ArithmeticError at runtime

    In this example, “three” is a string where a number was expected, so the call raises an error. Since Elixir doesn’t type-check at compile time, the error is only caught when the function runs.

    Meanwhile, Haskell uses static typing, which means all variable types are checked at compile time. If there’s a mismatch, the code won’t compile. This is very helpful in preventing many classes of bugs before the code execution.

    For example:

    add :: Int -> Int -> Int
    add a b = a + b

    main = print (add 2 3)          -- Works fine
    -- main = print (add 2 "three") -- Compile-time error: "three" is not an Int

    Here, the compiler will immediately catch the type mismatch and prevent runtime errors.

    Elixir’s dynamic typing gives you faster iteration and more flexible development. However, it doesn’t rely only on dynamic typing for its robustness. Instead, it follows Erlang’s “Golden Trinity” philosophy, which is:

    • Fail fast instead of trying to prevent all possible errors.
    • Maintain system stability with supervision trees, which automatically restart failed processes.
    • Use the BEAM VM to isolate failures so they don’t crash the system.

    Haskell’s static typing, on the other hand, gives you long-term maintainability and correctness up front. It’s particularly useful in high-assurance software projects, where errors must be kept to a minimum before execution.

    In comparison, Elixir is a popular choice for high-availability systems. Both are highly reliable, but the former is okay with failure and relies on recovery at runtime, whilst the latter enforces correctness at compile-time.

    Concurrency vs parallelism

    When considering Haskell vs Elixir, concurrency is one of the biggest differentiators. Both Elixir and Haskell are highly concurrent but take different approaches to it. Elixir is built for carrying out a massive number of processes simultaneously. In contrast, Haskell gives you powerful—but more manual—tools for parallel execution.

    Elixir manages effortless concurrency with BEAM. The Erlang VM is designed to handle millions of lightweight processes at the same time with high fault tolerance. These lightweight processes follow the actor model principles and are informally called “actors”, although Elixir doesn’t officially use this term.

    Unlike traditional OS threads, these processes are isolated and communicate through message-passing. That means that if one process crashes, BEAM uses supervision trees to restart it automatically while making sure it doesn’t affect the others. This is typical of the ‘let it crash’ philosophy, where failures are expected and handled. There is no expectation to eliminate them entirely.

    As a result, concurrency in Elixir is quite straightforward. You don’t need to manage locks, threads, or shared memory. Load balancing is managed efficiently by the BEAM scheduler across CPU cores, with no manual tuning required.
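
    The restart-on-crash idea can be caricatured in a few lines of Python. This toy "supervisor" is only an analogy; real OTP supervision trees add restart strategies, intensity limits, and genuine process isolation:

```python
import asyncio

class Crash(Exception):
    pass

async def flaky_worker(state):
    # Fails twice, then succeeds -- standing in for a crashing process.
    state["attempts"] += 1
    if state["attempts"] < 3:
        raise Crash("boom")
    return "done"

async def supervise(state, max_restarts=5):
    # A toy one-for-one supervisor: restart the worker when it crashes,
    # up to max_restarts times, instead of trying to prevent every failure.
    for _ in range(max_restarts):
        try:
            return await flaky_worker(state)
        except Crash:
            continue
    raise RuntimeError("restart limit exceeded")

state = {"attempts": 0}
print(asyncio.run(supervise(state)))  # -> done (after two simulated crashes)
```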

    Haskell also supports parallelism and concurrency, but it requires more explicit management. To achieve this, it uses several concurrency models, including software transactional memory (STM), lazy evaluation, and explicit parallelism, to efficiently utilise multicore processors.

    As a result, even though managing parallelism is more hands-on in Haskell, it can also bring significant performance advantages. For certain CPU-bound workloads, it can be substantially faster than Elixir.

    Additionally, Cloud Haskell extends Haskell’s concurrency model beyond a single machine. Inspired by Erlang’s message-passing approach, it allows distributed concurrency across multiple nodes, making Haskell viable for large-scale concurrent systems—not just parallel computations.

    Scaling and parallelism continue to be among the headaches of distributed programming. Find out what the others are. [Read more]

    Best-fit workloads

    Both Haskell and Elixir are highly capable, but the kinds of workloads for which they’re suitable are different. We’ve seen how running on the Erlang VM allows Elixir to be more fault-tolerant and support massive concurrency. It can also run processes along multiple nodes for seamless communication.

    Since Elixir can scale horizontally very easily—across multiple machines—it works really well for real-time applications like chat applications, IoT platforms, and financial transaction processing.

    Haskell optimises performance with parallel execution and smart use of system resources. It doesn’t have BEAM’s actor-based concurrency model, but its powerful parallelism features, which allow fine-grained use of multi-core processors, more than make up for it.

    It’s perfect for applications where you need heavy numerical computations, granular control over multi-core execution, and deterministic performance.

    So, where Elixir excels at processing high volumes of real-time transactions, Haskell works better for modelling, risk analysis, and regulatory compliance.

    Ecosystem and tooling

    Both Elixir and Haskell have strong ecosystems, but you must have noticed the theme running through our narrative. Yes, both are designed for different industries and development styles.

    Elixir’s ecosystem is practical and industry-focused, with a strong emphasis on web development and real-time applications. It has a growing community and a well-documented standard library, supplemented with production-ready libraries.

    Meanwhile, Haskell has a highly dedicated community in academia, finance, human therapeutics, wireless communications and networking, and compiler development. It offers powerful libraries for mathematical modelling, type safety, and parallel computing. However, tooling can sometimes feel less user-friendly compared to mainstream languages.

    For web development, Elixir offers the Phoenix framework: a high-performance web framework designed for real-time applications, which comes with built-in support for WebSockets and scalability. It follows Elixir’s functional programming principles but keeps development accessible with a syntax inspired by Ruby on Rails.

    Haskell’s Servant framework is a type-safe web framework that leverages the language’s static typing to ensure API correctness. While powerful, it comes with a steeper learning curve due to Haskell’s strict functional nature.

    Which one you should choose depends on your project’s requirements. If you’re looking for general web and backend development, Elixir’s Phoenix is the more practical choice. For research-heavy or high-assurance software, Haskell’s ecosystem provides formal guarantees.

    Maintainability and refactoring

    It’s important to manage technical debt while keeping software maintainable. Part of this is improving quality and future-proofing the code. Elixir’s syntax is clean and intuitive. It offers dynamic typing, meaning you can write code quickly without specifying types. This can make runtime errors harder to track sometimes, but debugging tools like IEx (Interactive Elixir) and Logger make troubleshooting straightforward.

    It’s also easier to refactor because of its dynamic nature and process isolation. Since BEAM isolates processes, refactoring can often be done incrementally without disrupting the rest of the system. This is particularly handy in long-running, real-time applications where uptime is crucial.

    Haskell, on the other hand, enforces strict type safety and a pure functional model, which makes debugging less frequent but more complex. As we mentioned earlier, the compiler catches most issues before runtime, reducing unexpected behaviour.

    However, this strictness means that refactoring in Haskell must be done carefully to maintain type compatibility, module integrity, and scope resolution. Unlike dynamically typed languages, where refactoring is often lightweight, Haskell’s strong type system and module dependencies can make certain refactorings more involved, especially when they affect function signatures or module structures.

    Research on Haskell refactoring highlights challenges like name capture, type signature compatibility, and module-level dependency management, which require careful handling to preserve correctness.

    Then, there’s pattern matching, which both languages use, but do it differently.

    Elixir’s pattern matching is flexible and widely used in function definitions and control flow, making code more readable and expressive.

    Haskell’s pattern matching is type-driven and enforced by the compiler, ensuring exhaustiveness but requiring more upfront design.

    So, which of the two is easier to maintain?

    Elixir is better suited for fast-moving projects where codebases evolve frequently, thanks to its fault-tolerant design and incremental refactoring capabilities.

    Haskell provides stronger guarantees of correctness, making it a better choice for mission-critical applications where stability outweighs development speed.

    Compilation speed

    One often overlooked difference between Elixir and Haskell is how they handle compilation and code updates.

    Elixir benefits from BEAM’s hot code swapping, where updates can be applied without stopping a running system. Additionally, Elixir compiles faster than Haskell because it doesn’t perform extensive type checking at compile time.

    This speeds up development cycles, which is what makes Elixir well-suited for projects requiring frequent updates and rapid iteration. However, since BEAM uses Just-In-Time (JIT) compilation, some optimisations happen at runtime rather than during compilation.

    Haskell, on the other hand, has a much stricter compilation process. The compiler performs heavy type inference and optimisation, which increases compilation time but results in highly efficient, predictable code.

    Learning curve

    Elixir is often considered easier to learn than Haskell. Its syntax is clean and approachable, especially if you’re coming from Ruby, Python, or JavaScript. The dynamic typing and friendly error messages make it easy to experiment without getting caught up in strict type constraints.

    Haskell, on the other hand, has a notoriously steep learning curve. It requires a shift in mindset, especially for those unfamiliar with pure functional programming, monads, lazy evaluation, and advanced type systems. While it rewards those who stick with it, the initial learning experience can be challenging, even if you’re an experienced developer.

    Metaprogramming

    Both Elixir and Haskell allow you to write highly flexible code, but they take different approaches.

    Elixir provides macros, which let you modify and extend the language at compile time. This makes it easy to generate boilerplate code, create domain-specific languages (DSLs), and build reusable abstractions. However, improper use of macros can make code harder to debug and maintain.
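    Python has no compile-time macros, but as a loose, illustrative analogue, a runtime function factory conveys the flavour of the generated boilerplate described above (the make_getter helper is hypothetical):

```python
# A runtime sketch of generated boilerplate: one factory stamps out
# many near-identical accessor functions instead of hand-writing them.
def make_getter(field):
    def getter(record):
        return record[field]
    getter.__name__ = f"get_{field}"
    return getter

get_name = make_getter("name")
get_age = make_getter("age")

person = {"name": "Ada", "age": 36}
print(get_name(person))  # Ada
print(get_age(person))   # 36
```

    Elixir macros do this kind of generation at compile time, operating on the syntax tree itself rather than producing closures at runtime.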

    Haskell doesn’t have macros but compensates with powerful type-level programming. Features like type families and higher-kinded types allow you to enforce complex rules at the type level. This enables incredible flexibility, but it also makes the language even harder to learn.

    Choosing between the two

    As you’ve seen, both Elixir and Haskell can be great choices when used in the right circumstances.

    If you do choose Elixir, we’ve got a great resource that discusses how Elixir and Erlang—the language that forms its foundation—can help in future-proofing legacy systems. Find out how their reliability and scalability make them great for modernising infrastructures.

    [ Read more ]

    Want to learn more? Drop the Erlang Solutions team a message.

    The post Elixir vs Haskell: What’s the Difference? appeared first on Erlang Solutions .


      Mathieu Pasquet: slixmpp v1.9.1

      news.movim.eu / PlanetJabber • 11 March

    This is mostly a bugfix release over version 1.9.0 .

    The main fix is the rust JID implementation that would behave incorrectly when hashed if the JID contained non-ascii characters. This is an important issue as using a non-ascii JID was mostly broken, and interacting with one failed in interesting ways.

    Fixes

    • The previously mentioned JID hash issue
    • Various edge cases in the roster code
    • One edge case in the MUC ( XEP-0045 ) plugin in join_muc_wait
    • Removed one broken entrypoint from the package
    • Fixed some issues in the MUC Self-Ping ( XEP-0410 ) plugin

    Enhancements

    • Stanza objects now have a __contains__ method (used by x in y ) that allows checking whether a plugin is present.
    • The “You should catch Iq… exceptions” message now includes the traceback
    • The MUC Self-Ping ( XEP-0410 ) plugin allows custom intervals and timeouts for each MUC.
    • Added a STRICT_INTERFACE mode (currently a global variable in the stanzabase module) that controls whether accessing a non-existing stanza attribute raises or warns; previously it only warned.
    • The CI does more stuff
    • More type hints here and there
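    As a minimal sketch of how the __contains__ hook enables the x in y check, here is a simplified, hypothetical stand-in (not slixmpp's actual Stanza implementation):

```python
# Hypothetical, simplified stand-in for a stanza object: defining
# __contains__ is what makes the `name in stanza` syntax work.
class Stanza:
    def __init__(self, plugins):
        self._plugins = set(plugins)

    def __contains__(self, name):
        return name in self._plugins

msg = Stanza({"body", "delay"})
print("body" in msg)  # True
print("oob" in msg)   # False
```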

    Links

    You can find the new release on codeberg , pypi , or the distributions that package it in a short while.

    blog.mathieui.net/en/slixmpp-1.9.1.html


      Erlang Solutions: Understanding Big Data in Healthcare

      news.movim.eu / PlanetJabber • 6 March • 7 minutes

    Healthcare generates large amounts of data every day, from patient records and medical scans to treatment plans and clinical trials. This information, known as big data, has the potential to improve patient care, boost efficiency, and drive innovation. But many organisations are still figuring out how to use it effectively.


    With AI-driven analytics, wearable technology, and real-time monitoring, healthcare providers, insurers, and pharmaceutical companies are using data to make better decisions for patients, personalise treatments, and predict health trends. So how can you do the same?

    Let’s explore the fundamentals of big data in healthcare, its real-world impact, and the steps leaders can take to make the most of it.

    What is Big Data?

    Big data refers to the vast amounts of structured and unstructured information from patient records, medical imaging, wearables, and clinical research. Proper analysis can improve patient care, support better decision-making, and reduce costs.

    This data comes from a wide range of sources, including electronic health records (EHRs), test results, diagnoses, medical images, and real-time data from smart wearables. It also includes healthcare-related financial and demographic information. When properly analysed, it helps identify patterns, predict health trends, and support evidence-based decision-making.

    The global big data healthcare market is expanding quickly and is expected to be worth USD 145.42 billion by 2033. As more organisations adopt AI-driven analytics and machine learning, data is becoming a key driver of innovation, helping healthcare professionals deliver more personalised and effective care.

    The Three V’s of Big Data

    To better understand big data, we can break it down into three key characteristics: volume, velocity, and variety.


    1. Volume

    The industry produces massive amounts of data, from electronic health records (EHRs) and medical imaging to clinical research and wearable devices. The total volume of healthcare data doubles every 73 days. Managing this requires advanced storage solutions, such as cloud computing and NoSQL databases , to handle both structured and unstructured data effectively.

    2. Velocity

    Healthcare data is constantly being created. Real-time data streams from patient monitoring systems, wearable technology , and AI-powered diagnostics provide continuous updates. To be useful, this data must be processed instantly, allowing professionals to make fast, informed decisions that support better patient care.

    3. Variety

    Healthcare data comes in many formats, from structured databases to unstructured text, images, videos, and biometric data . Around 80% of healthcare data is unstructured, meaning it doesn’t fit neatly into traditional databases. A patient’s medical history might include lab results, prescriptions, clinician notes, and radiology reports, all in different formats. Integrating and analysing this diverse information is essential for identifying trends and improving treatments.

    Mastering these three V’s helps healthcare organisations make better use of data, leading to more accurate diagnoses, personalised treatments, and improved patient outcomes.

    Key Sources of Healthcare Data

    Now that we’ve discussed the Three V’s , it’s important to explore where this data originates. The primary sources of healthcare data contribute to the massive volumes of information, real-time updates, and diverse formats that we’ve just covered.

    Here are some of the key sources:

    • Electronic Health Records (EHRs) & Medical Records (EMRs) – Digital records containing patient histories, test results, and prescriptions.
    • Wearable Devices & Health Apps – Smartwatches, fitness trackers, and remote monitoring tools that gather real-time health metrics.
    • Medical Imaging & Genomic Data – X-rays, MRIs, and DNA sequencing that assist in diagnostics, research, and precision medicine.
    • Clinical Trials & Research Databases – Data from large-scale studies that drive medical advancements and evidence-based medicine.
    • Public Health & Epidemiological Data – Population health data that track disease trends and guide public health strategies.
    • Hospital Information Systems (HIS) & Administrative Data – Operational data that help manage resources and patient flow within healthcare facilities.

    These sources contribute to the expanding pool of healthcare data, helping organisations make smarter decisions and deliver better care for patients.

    Benefits of Big Data in Healthcare

    As healthcare organisations continue to collect more data, big data is proving to be a game-changer in improving patient care, driving clinical outcomes, and making healthcare more efficient. By analysing vast amounts of information, providers can identify trends and patterns that may have otherwise gone unnoticed. Below are some of the key benefits that big data brings to healthcare, from better patient care to more effective operations.

    • Improved Patient Care – Identifies patterns to predict and prevent diseases, enabling personalised care. Impact: could save the healthcare industry £230 billion to £350 billion annually through improved care and efficiency.
    • Cost Reduction – Optimises resource allocation, reduces waste, and improves efficiency. Impact: predictive analytics can cut hospital readmissions by up to 20%, leading to significant savings.
    • Enhanced Clinical Outcomes – Integrates data to identify the most effective treatments for patients. Impact: improves clinical decision-making with real-time insights and evidence-based recommendations.
    • Accelerated Medical Research – Offers large datasets for faster analysis. Impact: reduces clinical trial times by 30% and associated costs by 50%.
    • Predictive Analytics – Forecasts patient needs, improving outcomes and reducing readmissions. Impact: helps optimise resources and reduce readmission rates, improving care and reducing costs.
    • Precision Medicine – Tailors treatments based on individual characteristics like genetics. Impact: enables more targeted and effective treatment plans.
    • Population Health Management – Identifies at-risk populations for targeted interventions. Impact: reduces the prevalence of chronic diseases through early detection and personalised care.
    • Operational Efficiency – Improves processes like inventory management and reduces waste. Impact: enhances resource management, reduces costs, and improves service delivery.

    Data Privacy and Security in Healthcare

    While big data enhances patient care and efficiency, it also brings critical data security challenges. IBM’s 2024 Cost of a Data Breach report puts the average healthcare breach at $9.77 million. Protecting patient data is crucial for maintaining trust and avoiding risks.

    Source: Cost of a Data Breach Report, IBM

    Key Challenges in Healthcare Data Security

    Several issues make healthcare data security more difficult:

    • Outdated Systems – Older systems may have security gaps that hackers can exploit.
    • Weak Passwords – Simple or reused passwords make it easier for unauthorised people to access sensitive data.
    • Internal Threats – Employees or contractors could accidentally or intentionally compromise data security.
    • Mobile and Cloud Security – As healthcare uses more mobile devices and cloud storage, keeping data safe across different platforms becomes harder.

    With so much data being collected and shared, these challenges are becoming more complex, making it crucial to stay on top of security measures.

    Regulatory Framework: HIPAA and Beyond

    In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) sets the rules for protecting patient data. While HIPAA covers the basics, healthcare organisations need to stay on top of evolving security threats and regulations as technology changes.

    Besides HIPAA, other important regulations include the HITECH Act , which supports the use of electronic health records (EHRs) and strengthens privacy protections, and the General Data Protection Regulation (GDPR) in the European Union, which controls how personal data is used and gives patients more control over their information.

    In our previous blog, The Golden Age of Data in Healthcare , we touched on the challenges that come with using new technologies like blockchain. While blockchain offers secure data storage, it also raises concerns around data ownership and staying compliant with rules like HIPAA and GDPR.

    Solutions to Enhance Healthcare Data Security

    To better protect patient data, healthcare organisations should implement:

    • Data Encryption : Keeps data secure even if intercepted.
    • Multi-Factor Authentication (MFA) : Adds an extra layer of security by requiring more than just a password.
    • System Monitoring and Threat Detection : Monitoring systems for unusual activity helps quickly spot potential breaches.
    • Employee Training : Teaching staff about security best practices and how to spot phishing attempts helps reduce risks.

    By following clear security measures and meeting regulatory requirements, organisations can prevent breaches and keep patient trust intact.

    Enhancing Healthcare Security with Erlang, Elixir, and SAFE

    As we’ve seen, healthcare faces ongoing security challenges such as outdated systems, weak passwords, internal threats, and securing mobile and cloud data. Erlang and Elixir , by their very nature, offer solutions to these problems.

    • Outdated systems: Erlang and Elixir are built for high availability and fault tolerance, ensuring critical healthcare systems remain operational without the risk of system failures, even when legacy infrastructure is involved.
    • Weak passwords & internal threats: Both technologies provide process isolation and robust concurrency, limiting the impact of internal threats and reducing the risk of unauthorised access.
    • Mobile and cloud security: With Erlang and Elixir’s scalability and resilience, securing data across mobile platforms and cloud environments becomes easier, supporting secure, seamless data exchanges.

    To further bolster security, SAFE (Security Audit for Erlang/Elixir) helps healthcare providers identify vulnerabilities in their systems. This service:

    • Identifies vulnerabilities in code that could expose systems to attacks.
    • Assesses risk levels to prioritise fixes.
    • Provides detailed reports that outline specific issues and solutions.

    By combining the inherent security benefits of Erlang and Elixir with the proactive audit capabilities of SAFE, healthcare organisations can safeguard patient data, reduce risk, and stay compliant with regulations like HIPAA.

    Conclusion

    Big data is transforming healthcare by improving patient care and outcomes. However, with this growth comes the need to secure sensitive data and ensure compliance with regulations like HIPAA and GDPR.

    Erlang and Elixir naturally address key security challenges, helping organisations protect patient information. Tools like SAFE identify vulnerabilities, reduce risks, and ensure compliance.

    Ultimately, securing patient data is critical for maintaining trust and delivering quality care. If you would like to talk more about securing your systems or staying compliant, contact our team.

    The post Understanding Big Data in Healthcare appeared first on Erlang Solutions .


      Erlang Solutions: Top 5 IoT Business Security Basics

      news.movim.eu / PlanetJabber • 27 February • 9 minutes

    IoT is now a fundamental part of modern business. With more than 17 billion connected devices worldwide, IoT business security is more important than ever. A single breach can expose sensitive data, disrupt operations, and damage a company’s reputation.

    To help safeguard your business, we’ll cover five essential IoT security basics: data encryption, strong password policies, regular security audits, employee awareness training, and disabling unnecessary features.

    1) Secure password practices

    Weak passwords make IoT devices susceptible to unauthorised access, leading to data breaches, privacy violations, and increased security risks. When companies install devices without changing default passwords, or set oversimplified ones, they create an entry point for attackers. Strong, unique passwords protect against these threats.

    Password managers

    Each device in a business should have its own unique password, changed on a regular basis. According to the 2024 IT Trends Report by JumpCloud, 83% of organisations surveyed use password-based authentication for some IT resources.

    Consider a business-wide password manager that stores your passwords securely and lets you use unique passwords across multiple accounts.

    Password managers are also incredibly important because they:

    • Help to spot fake websites, protecting you from phishing scams and attacks.
    • Allow you to synchronise passwords across multiple devices, making it easy and safe to log in wherever you are.
    • Flag when you are re-using the same password across different accounts.
    • Spot password changes that could indicate a security breach.
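    As a small sketch of the unique-password-per-device practice described above, Python's standard secrets module can generate a distinct high-entropy password for each device (the device names and 20-character length are illustrative choices):

```python
# Generate a unique, high-entropy password per device using the
# cryptographically secure stdlib `secrets` module.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def generate_password(length=20):
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One distinct password per device, never reused.
passwords = {device: generate_password() for device in ["router", "camera", "sensor"]}
```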

    Multi-factor authentication (MFA)

    Multi-factor authentication (MFA) adds an extra layer of security by requiring verification beyond just a password, such as SMS codes, biometric data, or other forms of app-based authentication. You’ll find that many password managers offer built-in MFA features.

    Some additional security benefits include:

    • Regulatory compliance
    • Safeguarding without password fatigue
    • Easily adaptable to a changing work environment
    • An extra layer of security compared to two-factor authentication (2FA)

    As soon as an IoT device is connected to a new network, it is strongly recommended that you replace its default credentials with a secure, complex password. Password managers let you generate unique passwords for each device, securing your IoT endpoints optimally.
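    The app-based codes mentioned above are typically built on the HMAC-based one-time password algorithm (HOTP, RFC 4226). A minimal, illustration-only sketch using only the Python standard library:

```python
# Minimal HOTP (RFC 4226) sketch: the basis of the one-time codes
# produced by authenticator apps. Stdlib only, for illustration.
import base64
import hashlib
import hmac
import struct

def hotp(secret_b32, counter, digits=6):
    key = base64.b32decode(secret_b32)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 4226 test vector: key "12345678901234567890", counter 0 -> "755224"
secret = base64.b32encode(b"12345678901234567890").decode()
print(hotp(secret, 0))  # 755224
```

    The time-based variant used by most apps (TOTP, RFC 6238) simply derives the counter from the current time.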

    2) Data encryption at every stage

    Why is data encryption so necessary? With the rapid growth of connected devices, data protection is a growing concern. In IoT, sensitive information (personal, financial, and location data, among others) is vulnerable to cyber-attacks if transmitted over public networks. When done correctly, data encryption renders that information unreadable to anyone without the decryption key. Once the data is encrypted, it is safeguarded, mitigating unnecessary risks.


    How to encrypt data in IoT devices

    There are a few data encryption techniques available to secure IoT devices from threats. Here are some of the most popular techniques:

    Triple Data Encryption Standard (Triple DES): Uses three rounds of DES encryption to secure data. Historically used for mission-critical applications, it is now considered a legacy algorithm.

    Advanced Encryption Standard (AES) : A commonly used encryption standard, known for its high security and performance. This is used by the US federal government to protect classified information.

    Rivest-Shamir-Adleman (RSA): This is based on public and private keys, used for secure data transfer and digital signatures.

    Each encryption technique has its strengths, but it is crucial to choose what best suits the specific requirements of your business.
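    To make the public/private key idea behind RSA concrete, here is a toy textbook walkthrough with tiny primes (illustration only; real deployments use keys of 2048 bits or more with proper padding):

```python
# Toy textbook RSA with tiny primes: illustrates the public/private
# key idea only. Never use unpadded, small-prime RSA in production.
p, q = 61, 53
n = p * q                    # 3233: public modulus
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent
d = pow(e, -1, phi)          # 2753: private exponent (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
plaintext = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
print(ciphertext, plaintext)       # 2790 65
```

    Anyone holding the public pair (e, n) can encrypt, but only the holder of d can decrypt, which is what makes RSA suitable for secure data transfer and digital signatures.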

    Encryption support with Erlang/Elixir

    When implementing data encryption protocols for IoT security, Erlang and Elixir offer great support to ensure secure communication between IoT devices. We go into greater detail about IoT security with Erlang and Elixir in a previous article, but here is a reminder of the capabilities that make them ideal for IoT applications:

    1. Concurrent and fault-tolerant nature: Erlang and Elixir have the ability to handle multiple concurrent connections and processes at the same time. This ensures that encryption operations do not bottleneck the system, allowing businesses to maintain high-performing, reliable systems through varying workloads.
    2. Built-in libraries: Both languages come with powerful libraries, providing effective tools for implementing encryption standards, such as AES and RSA.
    3. Scalable: Both systems are inherently scalable, allowing for secure data handling across multiple IoT devices.
    4. Easy integration: The syntax of Elixir makes it easier to integrate encryption protocols within IoT systems. This reduces development time and increases overall efficiency for businesses.

    Erlang and Elixir can be powerful tools for businesses, enhancing the security of IoT devices and delivering high-performance systems that ensure robust encryption support for peace of mind.

    3) Regular IoT inventory audits

    Performing regular security audits of your systems can be critical in protecting against vulnerabilities. Keeping up with the pace of IoT innovation often means some IoT security considerations get pushed to the side. But identifying weaknesses in existing systems allows organisations to implement a much-needed strategy.

    Types of IoT security testing

    We’ve explained how IoT audits are key in maintaining secure systems. Now let’s take a look at some of the common types of IoT security testing options available:


    Firmware software analysis

    Firmware analysis is a key part of IoT security testing. It examines the firmware, the core software embedded in IoT products (routers, monitors, etc.). Examining the firmware lets security tests identify system vulnerabilities that might not be initially apparent, improving the overall security of business IoT devices.

    Threat modelling

    In this popular testing method, security professionals create a checklist based on potential attack methods, and then suggest ways to mitigate them. This ensures the security of systems by offering analysis of necessary security controls.

    IoT penetration testing

    This type of security testing finds and exploits security vulnerabilities in IoT devices. IoT penetration testing is used to check the security of real-world IoT devices, including the entire ecosystem, not just the device itself.

    Incorporating these testing methods is essential to help identify and mitigate system vulnerabilities. Being proactive and addressing these potential security threats can help businesses maintain secure IoT infrastructure, enhancing operational efficiency and data protection.

    4) Training and educating your workforce

    Employees can be an entry point for network threats in the workplace.

    The days when BYOD (bring your own device) meant only employees’ laptops, tablets, and smartphones in the office are long gone. Now, personal IoT devices are also used in the workplace. Think of popular wearables like smartwatches, fitness trackers, e-readers, and portable game consoles. Even portable appliances like smart printers and smart coffee makers are increasingly common in office spaces.

    Example of increasing IoT devices in the office. Source: House of IT

    The various IoT devices spread across your business network make it a vulnerable target for cybercrime, through techniques such as phishing, credential hacking, and malware.

    Phishing attempts are among the most common. Even the most ‘tech-savvy’ person can fall victim to them. Attackers are skilled at making phishing emails seem legitimate, forging real domains and email addresses to appear like a legitimate business.

    Malware is another popular technique concealed in email attachments, sometimes disguised as Microsoft documents, unassuming to the recipient.

    Remote working and IoT business security

    Malicious actors are increasingly targeting remote workers. Research by Global Newswire shows that remote working increases the frequency of cyber attacks by a staggering 238%.

    The fact that remote employees house sensitive data on various IoT devices makes training even more important. Companies are increasingly securing the personal IoT devices used for home working to the same high standard as corporate devices.

    How are they doing this? IoT management solutions. These provide visibility and control over IoT devices, and key players across the IoT landscape are creating increasingly sophisticated management solutions that help companies administer devices and roll out updates remotely.

    The use of IoT devices is inevitable if your enterprise has a remote workforce.

    Regular remote updates for IoT devices are essential to ensure the software is up-to-date and patched. But even with these precautions, you should be aware of IoT device security risks and take steps to mitigate them.

    Importance of IoT training

    Getting employees involved in the security process encourages awareness and vigilance for protecting sensitive network data and devices.

    Comprehensive and regularly updated education and training are vital to prepare end-users for various security threats. Remember that a business network is only as secure as its least informed or untrained employee.

    Here are some key points employees need to know to maintain IoT security :

    • The best practices for security hygiene (for both personal and work devices and accounts).
    • Common and significant cybersecurity risks to your business.
    • The correct protocols to follow if they suspect they have fallen victim to an attack.
    • How to identify phishing, social engineering, domain spoofing, and other types of attacks.

    Investing the time and effort to ensure your employees are well informed and prepared for potential threats can significantly enhance your business’s overall IoT security standing.

    5) Disable unused features to ensure IoT security

    Enterprise IoT devices come with a range of functionalities. Take a smartwatch, for example. Its main purpose is to tell the time, but it might also include Bluetooth, Near-Field Communication (NFC), and voice activation. If you aren’t using these features, you’re leaving openings for hackers to breach your device. Deactivating unused features reduces the risk of cyberattacks by limiting the ways attackers can get in.

    Benefits of disabling unused features

    If these additional features are not being used, they can create unnecessary security vulnerabilities. Disabling unused features helps to ensure IoT security for businesses in several ways:

    1. Reduces attack surface : Unused features provide extra entry points for attackers. Disabling features limits the number of potential vulnerabilities that could be exploited, in turn reducing attacks overall.
    2. Minimises risk of exploits : Many IoT devices come with default settings that enable features which might not be necessary for business operations. Disabling these features minimises the risk of weak security.
    3. Improves performance and stability : Unused features can consume resources and affect the performance and stability of IoT devices. By disabling them, devices run more efficiently and are less likely to experience issues that could be exploited by attackers.
    4. Simplifies security management : Managing fewer active features simplifies security oversight. It becomes simpler to monitor and update any necessary features.
    5. Enhances regulatory compliance : Disabling unused features can help businesses meet regulatory requirements by ensuring that only the necessary and secure functionalities are active.

    To conclude

    The continued adoption of IoT is not stopping anytime soon. Neither are the possible risks. Implementing even some of the five tips we have highlighted can significantly mitigate the risks associated with the growing number of devices used for business operations.

    Ultimately, investing in your business’s IoT security is all about safeguarding the entire network, maintaining the continuity of day-to-day operations and preserving the reputation of your business. Want to learn more about keeping your IoT offering secure? Don’t hesitate to drop the Erlang Solutions team a message.

    The post Top 5 IoT Business Security Basics appeared first on Erlang Solutions .

    www.erlang-solutions.com/blog/top-5-tips-to-ensure-iot-security-for-your-business/


      Erlang Solutions: Highlights from CodeBEAM Lite London

      news.movim.eu / PlanetJabber • 20 February • 6 minutes

    The inaugural CodeBEAM Lite London conference was held at CodeNode last month, featuring 10 talks, 80 attendees, and an Erlang Solutions booth. There, attendees had the chance to set a high score in a BEAM-based asteroid game created by ESL’s Hernan Rivas Acosta, and win an Atari replica.

    Learning from and networking with experts across the BEAM world was an exciting opportunity. Here are the highlights from the talks at the event.

    Keynote: Gleam’s First Year

    Louis Pilfold kicked things off with an opening keynote all about Gleam , the statically-typed BEAM language he designed and developed, and which announced its version 1.0 a year ago at FOSDEM in Brussels.

    Louis laid out the primary goals of v1: productivity and sustainability, avoiding breaking changes and language bloat, and extensive, helpful, and easily navigable documentation. He then walked us through some of the progress made on Gleam in its first year of official release, with a particular focus on the many convenience and quality-of-life features of the language server, written in Rust. Finally, he measured Gleam’s success throughout 2024 in terms of Github usage and sponsorship money and looked forward to his goals for the language in 2025.

    The Art of Writing Beautiful Code

    “Make it work, then make it beautiful, then if you really, really have to, make it fast. 90 per cent of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!” Most of us are likely familiar with this famous Joe Armstrong quote, but what does it actually mean to write beautiful code?

    This question was the focus of Brujo Benavides’ talk, a tour through various examples of “ugly” code in Erlang, some of which may well be considered beautiful by programmers trying to avoid repeating code. If beauty is in the eye of the beholder, what’s more important is that each project has a consistent definition of what “beautiful” means. Brujo explored different methods of achieving this consistency, and how to balance it with the need for fast commits of important changes in a project.

    Why Livebook is My Dream Data Science Workbench

    Amplified’s Christopher Grainger took a more cerebral approach to his talk on Livebook, drawing on his background as both a historian and a data scientist to link the collaborative notebook software to a tradition of scientific collaboration dating back thousands of years.

    In his view, the fragmentation of the digital age led to key components of this tradition being lost; he explored how Livebook’s BEAM architecture brings it closer to being a digital equivalent of real-time collaboration in a lab than prior technologies like Jupyter Notebooks, and what further steps could be taken to get even closer to it in the future.

    Deploying Elixir on Azure With Some Bonus Side Quests

    Matteo Gheri of Pocketworks provided an industrial example of Elixir in action, explaining how his company used Azure in the course of building a Phoenix app for UK-based taxi company Veezu.

    Azure is used to host only 3.2% of Elixir apps, and Matteo walked through their journey figuring it out in detail, touching on deployment, infrastructure, CI/CD, and the challenges they encountered.

    Let’s Talk About Tests

    Erlang Solutions’ own Natalia Chechina took the stage next for a dive into the question of tests. She explored ways of convincing managers of the importance of testing, which types of test to prioritise depending on the circumstances of the project, and how best to structure testing in order to prevent developers from burning out. She stressed the importance of both making testing a key component of the development cycle and cultivating a positive attitude towards it.

    Eat Your Greens: A Philosophy for Language Design

    Replacing Guillaume Duboc’s cancelled talk on Elixir types was Peter Saxton, developer of a new language called Eat Your Greens (EYG). The philosophy behind the title refers to doing things that may be boring or unenjoyable but which lead to benefits in the long run, such as eating vegetables; Peter cited types as an example of this, and as such EYG is statically, structurally, and soundly typed. He then walked through other main features of his language, such as closure serialisation as JSON, hot code reloading, and the ability for it to be run entirely through keyboard shortcuts.

    Trade-Offs Using JSON in Elixir 1.18: Wrappers vs. Erlang Libraries

    Michał Muskała has a long history working with JSON on the BEAM, starting with his development of the Jason parser and generator, first released in 2017. He talked us through that history: writing Jason, turning his focus to Erlang/OTP and proposing a JSON module there, and then building on that for the Elixir JSON module, now part of the standard library in 1.18.

    He discussed the features of this new module, why it was better to use wrappers while transitioning to Elixir instead of calling Erlang directly, and how to simplify migration from Jason to JSON ahead of Elixir eventually requiring OTP 27.
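    The migration Michał described can be sketched in a few lines. A minimal example, assuming Elixir 1.18+ where the JSON module ships with the standard library and, like Jason, decodes objects into maps with string keys:

```elixir
# Elixir 1.18+: JSON is part of the standard library, backed by OTP's :json.
# Its API mirrors Jason's, so migration is often just a module rename.
data = %{"talks" => 10, "venue" => "CodeNode"}

encoded = JSON.encode!(data)      # previously: Jason.encode!(data)
decoded = JSON.decode!(encoded)   # previously: Jason.decode!(encoded)

# The value round-trips back to the original map.
true = decoded == data
```

    Since both modules raise on invalid input via their `!` variants, error handling written around Jason calls should generally carry over unchanged.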

    Distributed AtomVM: Let’s Create Clusters of Microcontrollers

    A useless machine and a tiny, battery-free LED device played central roles in Paul Guyot’s dive into AtomVM, an Erlang- and Elixir-based virtual machine for microcontrollers. He kicked off by demonstrating La machine, the first commercial AtomVM product, albeit without an internet connection, before explaining AtomVM’s intended use in IoT devices, and the recent addition of distributed Erlang. This was backed up by another demonstration, this time of the appropriately named “2.5g of Erlang” device. Finally, he explained AtomVM’s advantages compared to other IoT VMs and identified the next steps for the project.

    Erlang and RabbitMQ: The Erlang AMQP Client in Action

    Katleho Kanyane from Erlang Solutions then provided another industry use case, discussing how he helped to implement a RabbitMQ publisher using the Erlang AMQP client library while working with a large fintech client. Katleho talked through some of the basics of RabbitMQ implementation, best practices, and two issues he ran into involving flow control, an overload prevention feature in RabbitMQ that throttles components and leads to drastically reduced transfer rates. He wrapped up by discussing lessons he learned from the process and laying out a few guidelines for designing a publisher.

    Keynote: Introducing Tau5 – A New BEAM-Powered Live Coding Platform

    The closing keynote was also the only talk of the day to kick off with a music video, though that should be expected when live coding artist and Sonic Pi creator Sam Aaron is the one delivering it. Sam spoke passionately about his goal of making programming something everyone can try without needing or wanting to become a professional, and discussed his history of using Sonic Pi’s live coding software in education, including how he worked in complicated concepts such as concurrency without confusing students or teachers.

    He then discussed the limitations of Sonic Pi and how they are addressed by his new project, Tau5. While still in the proof-of-concept stage, Tau5 improves on Sonic Pi by being built on OTP from the ground up, being able to run in the browser, and including new features like visuals to add to live performances. He concluded with a demonstration of Tau5 and an explanation of his intentions for the project.

    Final Thoughts

    CodeBEAM Lite London 2025 was a fantastic day filled with fascinating talks, cool demos, and plenty more to excite any BEAM enthusiast. From hearing about the latest Gleam developments to diving into live coding with Tau5, it was clear that the community is thriving and full of creative energy. Whether it was learning tips for practical BEAM use or exploring cutting-edge new tools and languages, there was something for everyone.

    If you missed out this time, don’t worry: you’ll be welcome at the next one, and we hope to see you there. Until then, keep building, keep experimenting, and above all keep having fun with the BEAM!

    The post Highlights from CodeBEAM Lite London appeared first on Erlang Solutions.