

      Prosodical Thoughts: Prosody 13.0.0 released!

      news.movim.eu / PlanetJabber • 17 March • 7 minutes

    Welcome to a new major release of the Prosody XMPP server! While the 0.12 branch has served us well for a while now, this release brings a bunch of new features we’ve been busy polishing.

    If you’re unfamiliar with Prosody, it’s an open-source project that implements XMPP, an open standard protocol for online communication. Prosody is widely used to power everything from small self-hosted messaging servers to worldwide real-time applications such as Jitsi Meet. It’s part of a large ecosystem of compatible software that you can use for real-time online communication.

    Before we begin…

    The first thing anyone who has been following the project for a while will notice about this release is the version number.

    Long adherents of the cult of 0ver, we finally decided it was time to break away. As Shakespeare wrote, “That which we call a rose, by any other name would smell as sweet”, and the same is true of version numbers. Prosody has been stable and used in production deployments for many years, but the ‘0.x’ version number occasionally misled people into believing otherwise. Apart from shifting the middle component leftwards, nothing has changed.

    If you’re really curious, you can read full details in our versioning and support policy.

    Our version numbers have also been in step with Debian’s for several versions now. Could this become a thing? Maybe!

    Overview of changes

    This release brings a wide range of improvements that make Prosody more secure, performant, and easier to manage than ever before. Let’s review the most significant changes that administrators and users can look forward to across a range of different topics.

    Security and authentication

    Security takes centre stage in this release with several notable improvements. Building on DNSSEC, the addition of full DANE support for server-to-server connections strengthens the trust between federating XMPP servers.

    We’ve enhanced our support for channel binding, which is now compatible with TLS 1.3, and we’ve added support for XEP-0440 which helps clients know which channel binding methods the server supports. Channel binding protects your connection from certain machine-in-the-middle attacks, even if the server’s TLS certificate is compromised.

    Account management

    Administrators now have more granular control over user accounts with the ability to disable and enable them as needed. This can be particularly useful for public servers, where disabling an account can act as a reversible alternative to deletion.

    In fact, we now have the ability to set a grace period for deleted accounts to allow restoring an account (within the grace period) in case of accidental deletion.

    Roles and permissions

    A new role and permissions framework provides more flexible access control. Prosody supplies several built-in roles:

    • prosody:operator - for operators of the whole Prosody instance. By default, accounts with this role have full access, including to operations that affect the whole server.
    • prosody:admin - the usual role for admins of a specific virtual host (or component). Accounts with this role have permission to manage user accounts and various other aspects of the domain.
    • prosody:member - this role is for “normal” user accounts, but specifically those ones which are trusted to some extent by the administrators. Typically accounts that are created through an invitation or through manual provisioning by the admin have this role.
    • prosody:registered - this role is also for general user accounts, but is used by default for accounts which registered themselves, e.g. if the server has in-band registration enabled.
    • prosody:guest - finally, the “guest” role is used for temporary/anonymous accounts and is also the default for remote JIDs interacting with the server.

    For more details about how to use these roles, customize permissions, and more, read our new roles and permissions documentation. You will also find the link there for the development documentation, so module developers can make use of the new system.

    Shell commands

    Since the earliest releases, the prosodyctl command has been the admin’s primary way of managing and interacting with Prosody. In 0.12 we introduced the prosodyctl shell interface to send administrative commands to Prosody at runtime via a local connection. It has been a big success, and this release significantly extends its capabilities.

    • prosodyctl adduser/passwd/deluser commands now use the admin connection to create users, which improves compatibility with various storage and authentication plugins. It also ensures Prosody can instantly respond to changes, such as immediately disconnecting users when their account is deleted.
    • Pubsub management commands have been added, to create/configure/delete nodes and items on pubsub and PEP services without needing an XMPP client.
    • One of our own favourites as Prosody developers is the new prosodyctl shell watch log command, which lets you stream debug logs in real-time without needing to store them on the filesystem.
    • Similarly, there is now prosodyctl shell watch stanzas which lets you monitor stanzas to/from arbitrary JIDs, which is incredibly helpful for developers trying to diagnose various client issues.
    • Server-wide announcements can now be sent via the shell, optionally limiting the recipients by online status or role.
    • The shell has also gained a few new commands for interacting with MUC rooms.

    Improved Multi-User Chat (MUC) Management

    The MUC system has received a significant overhaul focusing on security and administrative control. By default, room creation is now restricted to local users, providing better control over who can create persistent and public rooms.

    Server administrators get new shell commands to inspect room occupants and affiliations, making day-to-day operations more straightforward.

    One notable change is that component admins are no longer automatically owners. This can be reverted to the old behaviour with component_admins_as_room_owners = true in the config, but this has known incompatibilities with some clients. Instead, admins can use the shell or ad-hoc commands to gain ownership of rooms when it’s necessary.

    Better Network Performance

    Network connectivity sees substantial improvements with the implementation of RFC 8305’s “Happy Eyeballs” algorithm, which enhances IPv4/IPv6 dual-stack performance and increases the chance of a successful connection.

    Support for TCP Fast Open and deferred accept capabilities (in the server_epoll backend) can potentially reduce connection latency.

    The server now also better handles SRV record selection by respecting the ‘weight’ parameter, leading to more efficient connection distribution.

    Storage and Performance Improvements

    Under the hood, Prosody now offers better query performance with its internal archive stores by generating indexes.

    SQLite users now have the option to use LuaSQLite3 instead of LuaDBI, potentially offering better performance and easier deployment.

    We’ve also added compatibility with SQLCipher, a fork of SQLite that adds support for encrypted databases.

    Configuration Improvements

    The configuration system has been modernized to support referencing and appending to previously set options, making complex configurations more manageable.

    While direct Lua API usage in the config file is now deprecated, it remains accessible through the new Lua.* namespace for those who need it.

    Also new in this release is the ability to reference credentials or other secrets in the configuration file, without storing them in the file itself. It is compatible with the credentials mechanisms supported by systemd, podman and more.

    Developer/API changes

    The development experience has always been an important part of our project - we set out to make an XMPP server that was very easy to extend and customize. Our developer API has improved with every release. We’ve even had first-time coders write Prosody plugins!

    There are too many improvements to list here, but some notable ones:

    • Storage access from modules has been simplified with a new ‘keyval+’ store type, which combines the old ‘keyval’ (default) and ‘map’ stores into a single interface. Before this change, many modules had to open the store twice to utilize the two APIs.
    • Any module can now replace custom permission handling with Prosody’s own permission framework via the simple module:may() API call.
    • Providing new commands for prosodyctl shell is now much easier for module developers.

    Backwards compatibility is of course generally preserved, although is_admin() has been deprecated in favour of module:may(). Modules that want to remain compatible with older versions can use mod_compat_roles to enable (limited) permission functionality.

    Important Notes for Upgrading

    A few breaking changes are worth noting:

    • Lua 5.1 support has been removed (this also breaks compatibility with LuaJIT, which is based primarily on Lua 5.1).
    • Some MUC default behaviors have changed regarding room creation and admin permissions (see above).

    Conclusion

    We’re very excited about this release, which represents a significant step forward for Prosody, and contains improvements for virtually every aspect of the server. From enhanced security to better performance and more flexible administration tools, there has never been a better time to deploy Prosody and take control of your realtime communications.

    As always, if you have any problems or questions with Prosody or the new release, drop by our community chat!

      blog.prosody.im/prosody-13.0.0-released/


      Erlang Solutions: Elixir vs Haskell: What’s the Difference?

      news.movim.eu / PlanetJabber • 13 March • 11 minutes

    Elixir and Haskell are two very powerful, very popular programming languages. However, each has its strengths and weaknesses. Whilst they are similar in a few ways, it’s their differences that make them more suitable for certain tasks.

    Here’s an Elixir vs Haskell comparison.

    Elixir vs Haskell: a comparison

    Core philosophy and design goals

    Starting at a top-level view of both languages, the first difference we see is in their fundamental philosophies. Both are functional languages. However, their design choices reflect very different priorities.

    Elixir is designed for the real world. It runs on the Erlang VM (BEAM), which was built to handle massive concurrency, distributed systems, and applications that can’t afford downtime, like telecoms, messaging platforms, and web apps.

    Elixir prioritises:

    • Concurrency-first – It uses lightweight processes and message passing to make scalability easier.
    • Fault tolerance – It follows a “Let it crash” philosophy to ensure failures don’t take down the whole system.
    • Developer-friendly – Its Ruby-like syntax makes functional programming approachable and readable.

    Elixir is not designed for theoretical rigour; it’s practical. It gives you the tools you need to build robust, scalable systems quickly, even if that means allowing some flexibility in functional purity.

    Haskell, on the other hand, is all about mathematical precision. It enforces a pure programming model. As a result, functions don’t have side effects, and data is immutable by default. This makes it incredibly powerful for provably correct, type-safe programs, but it also comes with a steeper learning curve.

    We would like to clarify that Elixir’s data is also immutable, but it does a great job of hiding that fact. You can “reassign” variables and ostensibly change values, but the data underneath remains unchanged. It’s just that Haskell doesn’t allow that at all.

    Haskell offers:

    • Pure functions – No surprises; given the same input, a function will always return the same output.
    • Static typing with strong guarantees – The type system (with Hindley-Milner inference, monads, and algebraic data types) helps catch errors at compile time instead of run time.
    • Lazy evaluation – Expressions aren’t evaluated until they’re needed, optimising performance but adding complexity.

    Haskell is a language for those who prioritise correctness, mathematical rigour, and abstraction over quick iterations and real-world flexibility. That does not mean it’s slower and inflexible. In fact, experienced Haskellers will use its strong type guarantees to iterate faster, relying on its compiler to catch any mistakes. However, it does contrast with Elixir’s gradual tightening approach. Here, interaction between processes is prioritised, and initial development is quick and flexible, becoming more and more precise as the system evolves.

    Typing: dynamic vs static

    The next significant difference between Elixir and Haskell is how they handle types.

    Elixir is dynamically typed. It doesn’t require explicitly declared variable types; types are checked at run time. As a result, it’s fast to write and easy to prototype, allowing you to focus on functionality rather than defining types up front.

    Of course, there’s a cost attached to this flexibility. If types are only checked at run time, any errors are also only detected then. Mistakes that could have been caught earlier surface when the code is executed. In a large project, this can make debugging a nightmare.

    For example:

    def add(a, b), do: a + b  
    
    IO.puts add(2, 3)      # Works fine
    IO.puts add(2, "three") # Causes a runtime error
    

    In this example, “three” is a string where a number is expected, so the call raises an error. Since Elixir doesn’t type-check at compile time, the mistake is only caught when the function runs.

    Meanwhile, Haskell uses static typing, which means all variable types are checked at compile time. If there’s a mismatch, the code won’t compile. This is very helpful in preventing many classes of bugs before the code execution.

    For example:

    add :: Int -> Int -> Int
    add a b = a + b
    
    main = print (add 2 3)           -- Works fine
    -- main = print (add 2 "three") -- Compile-time error: "three" is not an Int
    
    

    Here, the compiler will immediately catch the type mismatch and prevent runtime errors.

    Elixir’s dynamic typing gives you faster iteration and more flexible development. However, it doesn’t rely only on dynamic typing for its robustness. Instead, it follows Erlang’s “Golden Trinity” philosophy, which is:

    • Fail fast instead of trying to prevent all possible errors.
    • Maintain system stability with supervision trees, which automatically restart failed processes.
    • Use the BEAM VM to isolate failures so they don’t crash the system.
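
    The supervision-tree idea above can be sketched in a few lines of Elixir. This is a minimal illustration (the Worker module name is ours, not from the article), assuming a standard GenServer child:

```elixir
defmodule Worker do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: {:ok, %{}}
end

# A :one_for_one supervisor restarts only the child that crashed,
# leaving its siblings untouched: failures stay isolated.
{:ok, _sup} = Supervisor.start_link([Worker], strategy: :one_for_one)
```

    If Worker crashes, the supervisor simply starts a fresh instance; the rest of the system carries on.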

    Haskell’s static typing, on the other hand, gives you long-term maintainability and correctness up front. It’s particularly useful in high-assurance software projects, where errors must be kept to a minimum before execution.

    In comparison, Elixir is a popular choice for high-availability systems. Both are highly reliable, but the former is okay with failure and relies on recovery at runtime, whilst the latter enforces correctness at compile-time.

    Concurrency vs parallelism

    When considering Haskell vs Elixir, concurrency is one of the biggest differentiators. Both Elixir and Haskell are highly concurrent but take different approaches to it. Elixir is built for carrying out a massive number of processes simultaneously. In contrast, Haskell gives you powerful—but more manual—tools for parallel execution.

    Elixir manages effortless concurrency with BEAM. The Erlang VM is designed to handle millions of lightweight processes at the same time with high fault tolerance. These lightweight processes follow the actor model principles and are informally called “actors”, although Elixir doesn’t officially use this term.

    Unlike traditional OS threads, these processes are isolated and communicate through message-passing. That means that if one process crashes, BEAM uses supervision trees to restart it automatically while making sure it doesn’t affect the others. This is typical of the ‘let it crash’ philosophy, where failures are expected and handled. There is no expectation to eliminate them entirely.

    As a result, concurrency in Elixir is quite straightforward. You don’t need to manage locks, threads, or shared memory. Load balancing is managed efficiently by the BEAM scheduler across CPU cores, with no manual tuning required.
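
    As a minimal sketch of this style (the process and message names here are our own invention), spawning a process and exchanging messages looks like this:

```elixir
parent = self()

# spawn/1 starts an isolated, lightweight BEAM process.
pid = spawn(fn ->
  receive do
    {:ping, from} -> send(from, :pong)
  end
end)

# Processes share no memory; they interact only through messages.
send(pid, {:ping, parent})

receive do
  :pong -> IO.puts("got pong")
after
  1_000 -> IO.puts("timed out")
end
```

    No locks or shared state are involved; a receive block with an after timeout is all the coordination needed.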

    Haskell also supports parallelism and concurrency but it requires more explicit management. To achieve this, it uses several concurrency models, including software transactional memory (STM), lazy evaluations, and explicit parallelism to efficiently utilise multicore processors.

    As a result, even though managing parallelism is more hands-on in Haskell, it also leads to some pretty significant performance advantages. For certain workloads, it can be several orders of magnitude faster than Elixir.

    Additionally, Cloud Haskell extends Haskell’s concurrency model beyond a single machine. Inspired by Erlang’s message-passing approach, it allows distributed concurrency across multiple nodes, making Haskell viable for large-scale concurrent systems—not just parallel computations.

    Scaling and parallelism continue to be one of the headaches of distributed programming. Find out what the others are.
    [ Read more ]

    Best-fit workloads

    Both Haskell and Elixir are highly capable, but the kinds of workloads for which they’re suitable are different. We’ve seen how running on the Erlang VM allows Elixir to be more fault-tolerant and support massive concurrency. It can also run processes along multiple nodes for seamless communication.

    Since Elixir can scale horizontally very easily—across multiple machines—it works really well for real-time applications like chat applications, IoT platforms, and financial transaction processing.

    Haskell optimises performance with parallel execution and smart use of system resources. It doesn’t have BEAM’s actor-based concurrency model, but its powerful language features, which allow fine-grained use of multi-core processors, more than make up for it.

    It’s perfect for applications where you need heavy numerical computations, granular control over multi-core execution, and deterministic performance.

    So, where Elixir excels at processing high volumes of real-time transactions, Haskell works better for modelling, risk analysis, and regulatory compliance.

    Ecosystem and tooling

    Both Elixir and Haskell have strong ecosystems, but as you may have noticed by now, each is geared towards different industries and development styles.

    Elixir’s ecosystem is practical and industry-focused, with a strong emphasis on web development and real-time applications. It has a growing community and a well-documented standard library, supplemented with production-ready libraries.

    Meanwhile, Haskell has a highly dedicated community in academia, finance, human therapeutics, wireless communications and networking, and compiler development. It offers powerful libraries for mathematical modelling, type safety, and parallel computing. However, tooling can sometimes feel less user-friendly compared to mainstream languages.

    For web development, Elixir offers the Phoenix framework: a high-performance web framework designed for real-time applications, which comes with built-in support for WebSockets and scalability. It follows Elixir’s functional programming principles but keeps development accessible with a syntax inspired by Ruby on Rails.

    Haskell’s Servant framework is a type-safe web framework that leverages the language’s static typing to ensure API correctness. While powerful, it comes with a steeper learning curve due to Haskell’s strict functional nature.

    Which one you should choose depends on your project’s requirements. If you’re looking for general web and backend development, Elixir’s Phoenix is the more practical choice. For research-heavy or high-assurance software, Haskell’s ecosystem provides formal guarantees.

    Maintainability and refactoring

    It’s important to manage technical debt while keeping software maintainable. Part of this is improving quality and future-proofing the code. Elixir’s syntax is clean and intuitive. It offers dynamic typing, meaning you can write code quickly without specifying types. This can make runtime errors harder to track sometimes, but debugging tools like IEx (Interactive Elixir) and Logger make troubleshooting straightforward.

    It’s also easier to refactor because of its dynamic nature and process isolation. Since BEAM isolates processes, refactoring can often be done incrementally without disrupting the rest of the system. This is particularly handy in long-running, real-time applications where uptime is crucial.

    Haskell, on the other hand, enforces strict type safety and a pure functional model, which makes debugging less frequent but more complex. As we mentioned earlier, the compiler catches most issues before runtime, reducing unexpected behaviour.

    However, this strictness means that refactoring in Haskell must be done carefully to maintain type compatibility, module integrity, and scope resolution. Unlike dynamically typed languages, where refactoring is often lightweight, Haskell’s strong type system and module dependencies can make certain refactorings more involved, especially when they affect function signatures or module structures.

    Research on Haskell refactoring highlights challenges like name capture, type signature compatibility, and module-level dependency management, which require careful handling to preserve correctness.

    Then there’s pattern matching, which both languages use, but in different ways.

    Elixir’s pattern matching is flexible and widely used in function definitions and control flow, making code more readable and expressive.

    Haskell’s pattern matching is type-driven and enforced by the compiler, ensuring exhaustiveness but requiring a more upfront design.
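
    To make the contrast concrete, here is a small Elixir sketch (the Shape module is our own example, not from the article):

```elixir
defmodule Shape do
  # Each clause matches a different tuple shape; clauses are tried in order.
  def area({:circle, r}), do: :math.pi() * r * r
  def area({:rect, w, h}), do: w * h
end

Shape.area({:rect, 3, 4})  # => 12
```

    A shape with no matching clause raises a FunctionClauseError at run time, whereas Haskell’s compiler can warn about a non-exhaustive match at compile time.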

    So, which of the two is easier to maintain?

    Elixir is better suited for fast-moving projects where codebases evolve frequently, thanks to its fault-tolerant design and incremental refactoring capabilities.

    Haskell provides stronger guarantees of correctness, making it a better choice for mission-critical applications where stability outweighs development speed.

    Compilation speed

    One often overlooked difference between Elixir and Haskell is how they handle compilation and code updates.

    Elixir benefits from BEAM’s hot code swapping, where updates can be applied without stopping a running system. Additionally, Elixir compiles faster than Haskell because it doesn’t perform extensive type checking at compile time.

    This speeds up development cycles, which is what makes Elixir well-suited for projects requiring frequent updates and rapid iteration. However, since BEAM uses Just-In-Time (JIT) compilation, some optimisations happen at runtime rather than during compilation.

    Haskell, on the other hand, has a much stricter compilation process. The compiler performs heavy type inference and optimisation, which increases compilation time but results in highly efficient, predictable code.

    Learning curve

    Elixir is often considered easier to learn than Haskell. Its syntax is clean and approachable, especially if you’re coming from Ruby, Python, or JavaScript. The dynamic typing and friendly error messages make it easy to experiment without getting caught up in strict type constraints.

    Haskell, on the other hand, has a notoriously steep learning curve. It requires a shift in mindset, especially for those unfamiliar with pure functional programming, monads, lazy evaluation, and advanced type systems. While it rewards those who stick with it, the initial learning experience can be challenging, even if you’re an experienced developer.

    Metaprogramming

    Both Elixir and Haskell allow you to write highly flexible code, but they take different approaches.

    Elixir provides macros, with which you can modify and extend the language at compile time. This makes it easy to generate boilerplate code, create domain-specific languages (DSLs), and build reusable abstractions. However, improper use of macros can make code harder to debug and maintain.
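
    As a brief sketch (this macro is purely illustrative, not part of any library), an Elixir macro receives the AST of its arguments at compile time and builds new code with quote/unquote:

```elixir
defmodule MyMacros do
  # The do-block arrives as unevaluated AST, so it only runs
  # if the generated `if` expression decides it should.
  defmacro unless_zero(value, do: block) do
    quote do
      if unquote(value) != 0, do: unquote(block)
    end
  end
end

defmodule Demo do
  require MyMacros

  def describe(n) do
    MyMacros.unless_zero(n, do: "non-zero") || "zero"
  end
end

Demo.describe(5)  # => "non-zero"
```

    Because the macro is expanded at compile time, the generated code carries no runtime overhead beyond the plain if it produces.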

    Haskell doesn’t have Elixir-style macros (although Template Haskell offers its own form of compile-time metaprogramming) but compensates with powerful type-level programming. Features like type families and higher-kinded types allow you to enforce complex rules at the type level. This enables incredible flexibility, but it also makes the language even harder to learn.

    Choosing between the two

    As you’ve seen, both Elixir and Haskell can be great, if used correctly in the right circumstances.

    If you do choose Elixir, we’ve got a great resource that discusses how Elixir and Erlang—the language that forms its foundation—can help in future-proofing legacy systems. Find out how their reliability and scalability make them great for modernising infrastructures.

    [ Read more ]

    Want to learn more? Drop the Erlang Solutions team a message.

    The post Elixir vs Haskell: What’s the Difference? appeared first on Erlang Solutions.


      Mathieu Pasquet: slixmpp v1.9.1

      news.movim.eu / PlanetJabber • 11 March

    This is mostly a bugfix release over version 1.9.0.

    The main fix is to the Rust JID implementation, which behaved incorrectly when hashed if the JID contained non-ASCII characters. This is an important issue, as using a non-ASCII JID was mostly broken, and interacting with one failed in interesting ways.

    Fixes

    • The previously mentioned JID hash issue
    • Various edge cases in the roster code
    • One edge case in join_muc_wait in the MUC (XEP-0045) plugin
    • Removed one broken entrypoint from the package
    • Fixed some issues in the MUC Self-Ping (XEP-0410) plugin

    Enhancements

    • Stanza objects now have a __contains__ method (used by x in y) that allows checking whether a plugin is present.
    • The “You should catch Iq… exceptions” message now includes the traceback
    • The MUC Self-Ping (XEP-0410) plugin allows custom intervals and timeouts for each MUC.
    • Added a STRICT_INTERFACE mode (currently a global variable in the stanzabase module) that controls whether accessing a non-existing stanza attribute raises or warns; previously it only warned.
    • The CI does more stuff
    • More type hints here and there

    Links

    You can find the new release on Codeberg, PyPI, or, in a short while, the distributions that package it.

      blog.mathieui.net/en/slixmpp-1.9.1.html


      Erlang Solutions: Understanding Big Data in Healthcare

      news.movim.eu / PlanetJabber • 6 March • 7 minutes

    Healthcare generates large amounts of data every day, from patient records and medical scans to treatment plans and clinical trials. This information, known as big data, has the potential to improve patient care, increase efficiency, and drive innovation. But many organisations are still figuring out how to use it effectively.


    With AI-driven analytics, wearable technology, and real-time monitoring, healthcare providers, insurers, and pharmaceutical companies are using data to make better decisions for patients, personalise treatments, and predict health trends. So how can you do the same?

    Let’s explore the fundamentals of big data in healthcare, its real-world impact and what steps leaders can take to maximise its growing impact.

    What is Big Data?

    Big data refers to the vast amounts of structured and unstructured information from patient records, medical imaging, wearables, and clinical research. Proper analysis can improve patient care, support better decision-making, and reduce costs.

    This data comes from a wide range of sources, including electronic health records (EHRs), test results, diagnoses, medical images, and real-time data from smart wearables. It also includes healthcare-related financial and demographic information. When properly analysed, it helps identify patterns, predict health trends, and support evidence-based decision-making.

    The global big data in healthcare market is expanding quickly and is expected to be worth USD 145.42 billion by 2033. As more organisations adopt AI-driven analytics and machine learning, data is becoming a key driver of innovation, helping healthcare professionals deliver more personalised and effective care.

    The Three V’s of Big Data

    To better understand big data, we can break it down into three key characteristics: volume, velocity, and variety.

    (Figure: the three V’s of big data in healthcare)

    1. Volume

    The industry produces massive amounts of data, from electronic health records (EHRs) and medical imaging to clinical research and wearable devices. The total volume of healthcare data doubles every 73 days. Managing this requires advanced storage solutions, such as cloud computing and NoSQL databases, to handle both structured and unstructured data effectively.

    2. Velocity

    Healthcare data is constantly being created. Real-time data streams from patient monitoring systems, wearable technology, and AI-powered diagnostics provide continuous updates. To be useful, this data must be processed instantly, allowing professionals to make fast, informed decisions that support better patient care.

    3. Variety

    Healthcare data comes in many formats, from structured databases to unstructured text, images, videos, and biometric data. Around 80% of healthcare data is unstructured, meaning it doesn’t fit neatly into traditional databases. A patient’s medical history might include lab results, prescriptions, clinician notes, and radiology reports, all in different formats. Integrating and analysing this diverse information is essential for identifying trends and improving treatments.

    Mastering these three V’s helps healthcare organisations make better use of data, leading to more accurate diagnoses, personalised treatments, and improved patient outcomes.

    Key Sources of Healthcare Data

    Now that we’ve discussed the three V’s, it’s important to explore where this data originates. The primary sources of healthcare data contribute to the massive volumes of information, real-time updates, and diverse formats that we’ve just covered.

    Here are some of the key sources:

    • Electronic Health Records (EHRs) & Electronic Medical Records (EMRs) – Digital records containing patient histories, test results, and prescriptions.
    • Wearable Devices & Health Apps – Smartwatches, fitness trackers, and remote monitoring tools that gather real-time health metrics.
    • Medical Imaging & Genomic Data – X-rays, MRIs, and DNA sequencing that assist in diagnostics, research, and precision medicine.
    • Clinical Trials & Research Databases – Data from large-scale studies that drive medical advancements and evidence-based medicine.
    • Public Health & Epidemiological Data – Population health data that track disease trends and guide public health strategies.
    • Hospital Information Systems (HIS) & Administrative Data – Operational data that help manage resources and patient flow within healthcare facilities.

    These sources contribute to the expanding pool of healthcare data, helping organisations make smarter decisions and deliver better care for patients.

    Benefits of Big Data in Healthcare

    As healthcare organisations continue to collect more data, big data is proving to be a game-changer in improving patient care, driving clinical outcomes, and making healthcare more efficient. By analysing vast amounts of information, providers can identify trends and patterns that may have otherwise gone unnoticed. Below are some of the key benefits that big data brings to healthcare, from better patient care to more effective operations.

    • Improved Patient Care – Identifies patterns to predict and prevent diseases, enabling personalised care. Impact: could save the healthcare industry £230 billion to £350 billion annually through improved care and efficiency.
    • Cost Reduction – Optimises resource allocation, reduces waste, and improves efficiency. Impact: predictive analytics can cut hospital readmissions by up to 20%, leading to significant savings.
    • Enhanced Clinical Outcomes – Integrates data to identify the most effective treatments for patients. Impact: improves clinical decision-making with real-time insights and evidence-based recommendations.
    • Accelerated Medical Research – Offers large datasets for faster analysis. Impact: reduces clinical trial times by 30% and associated costs by 50%.
    • Predictive Analytics – Forecasts patient needs, improving outcomes and reducing readmissions. Impact: helps optimise resources and reduce readmission rates, improving care and reducing costs.
    • Precision Medicine – Tailors treatments to individual characteristics such as genetics. Impact: enables more targeted and effective treatment plans.
    • Population Health Management – Identifies at-risk populations for targeted interventions. Impact: reduces the prevalence of chronic diseases through early detection and personalised care.
    • Operational Efficiency – Improves processes such as inventory management and reduces waste. Impact: enhances resource management, reduces costs, and improves service delivery.

    Data Privacy and Security in Healthcare

    While big data enhances patient care and efficiency, it also brings critical data security challenges. IBM’s 2024 Cost of a Data Breach report puts the average cost of a healthcare breach at $9.77 million. Protecting patient data is crucial for maintaining trust and avoiding risks.

    [Figure: average cost of a healthcare data breach. Source: Cost of a Data Breach Report, IBM]

    Key Challenges in Healthcare Data Security

    Several issues make healthcare data security more difficult:

    • Outdated Systems – Older systems may have security gaps that hackers can exploit.
    • Weak Passwords – Simple or reused passwords make it easier for unauthorised people to access sensitive data.
    • Internal Threats – Employees or contractors could accidentally or intentionally compromise data security.
    • Mobile and Cloud Security – As healthcare uses more mobile devices and cloud storage, keeping data safe across different platforms becomes harder.

    With so much data being collected and shared, these challenges are becoming more complex, making it crucial to stay on top of security measures.

    Regulatory Framework: HIPAA and Beyond

    In the U.S., the Health Insurance Portability and Accountability Act (HIPAA) sets the rules for protecting patient data. While HIPAA covers the basics, healthcare organisations need to stay on top of evolving security threats and regulations as technology changes.

    Besides HIPAA, other important regulations include the HITECH Act, which supports the use of electronic health records (EHRs) and strengthens privacy protections, and the General Data Protection Regulation (GDPR) in the European Union, which controls how personal data is used and gives patients more control over their information.

    In our previous blog, The Golden Age of Data in Healthcare, we touched on the challenges that come with using new technologies like blockchain. While blockchain offers secure data storage, it also raises concerns around data ownership and staying compliant with rules like HIPAA and GDPR.

    Solutions to Enhance Healthcare Data Security

    To better protect patient data, healthcare organisations should implement:

    • Data Encryption: Keeps data secure even if intercepted.
    • Multi-Factor Authentication (MFA): Adds an extra layer of security by requiring more than just a password.
    • System Monitoring and Threat Detection: Monitoring systems for unusual activity helps quickly spot potential breaches.
    • Employee Training: Teaching staff about security best practices and how to spot phishing attempts helps reduce risks.

    By following clear security measures and meeting regulatory requirements, organisations can prevent breaches and keep patient trust intact.
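    To make one of these measures concrete: before patient data is shared for analysis, direct identifiers can be pseudonymised with a keyed hash so records stay linkable without exposing the original identifier. Below is a minimal Python sketch; the `pseudonymise` helper, the sample identifier, and the key handling are illustrative assumptions, not a prescribed implementation.

```python
import hmac
import hashlib

def pseudonymise(patient_id: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA-256).

    Records keyed this way can still be joined for analysis, but the
    original identifier cannot be recovered without the secret key.
    """
    return hmac.new(key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key, held only by the data controller:
key = b"a-secret-key-held-by-the-data-controller"
token = pseudonymise("NHS-123-456-7890", key)

# The same input and key always map to the same token, so joins still work:
assert token == pseudonymise("NHS-123-456-7890", key)
# A different key yields an unlinkable token:
assert token != pseudonymise("NHS-123-456-7890", b"another-key")
```

    A keyed hash complements (rather than replaces) encryption at rest and in transit: it protects identifiers even in datasets that must remain readable for analytics.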

    Enhancing Healthcare Security with Erlang, Elixir, and SAFE

    As we’ve seen, healthcare faces ongoing security challenges such as outdated systems, weak passwords, internal threats, and securing mobile and cloud data. Erlang and Elixir, by their very nature, offer solutions to these problems.

    • Outdated systems: Erlang and Elixir are built for high availability and fault tolerance, ensuring critical healthcare systems remain operational without the risk of system failures, even when legacy infrastructure is involved.
    • Weak passwords & internal threats: Both technologies provide process isolation and robust concurrency, limiting the impact of internal threats and reducing the risk of unauthorised access.
    • Mobile and cloud security: With Erlang and Elixir’s scalability and resilience, securing data across mobile platforms and cloud environments becomes easier, supporting secure, seamless data exchanges.

    To further bolster security, SAFE (Security Audit for Erlang/Elixir) helps healthcare providers identify vulnerabilities in their systems. This service:

    • Identifies vulnerabilities in code that could expose systems to attacks.
    • Assesses risk levels to prioritise fixes.
    • Provides detailed reports that outline specific issues and solutions.

    By combining the inherent security benefits of Erlang and Elixir with the proactive audit capabilities of SAFE, healthcare organisations can safeguard patient data, reduce risk, and stay compliant with regulations like HIPAA.

    Conclusion

    Big data is transforming healthcare by improving patient care and outcomes. However, with this growth comes the need to secure sensitive data and ensure compliance with regulations like HIPAA and GDPR.

    Erlang and Elixir naturally address key security challenges, helping organisations protect patient information. Tools like SAFE identify vulnerabilities, reduce risks, and ensure compliance.

    Ultimately, securing patient data is critical for maintaining trust and delivering quality care. If you would like to talk more about securing your systems or staying compliant, contact our team.

    The post Understanding Big Data in Healthcare appeared first on Erlang Solutions .


      Erlang Solutions: Top 5 IoT Business Security Basics

      news.movim.eu / PlanetJabber • 27 February • 9 minutes

    IoT is now a fundamental part of modern business. With more than 17 billion connected devices worldwide, IoT business security is more important than ever. A single breach can expose sensitive data, disrupt operations, and damage a company’s reputation.

    To help safeguard your business, we’ll cover five essential IoT security basics: strong password practices, data encryption, regular security audits, employee awareness training, and disabling unnecessary features.

    1) Secure password practices

    Weak passwords make IoT devices susceptible to unauthorised access, leading to data breaches, privacy violations, and increased security risks. When companies install devices without changing default passwords, or create oversimplified ones, they open a gateway for attackers. Implementing strong, unique passwords protects against these threats.

    Password managers

    Each device in a business should have its own unique password, changed on a regular basis. According to the 2024 IT Trends Report by JumpCloud, 83% of organisations surveyed use password-based authentication for some IT resources.

    Consider using a business-wide password manager that stores your passwords securely and lets you maintain unique passwords across multiple accounts.

    Password managers are also incredibly important as they:

    • Help to spot fake websites, protecting you from phishing scams and attacks.
    • Allow you to synchronise passwords across multiple devices, making it easy and safe to log in wherever you are.
    • Track if you are re-using the same password across different accounts for additional security.
    • Spot any password changes that could appear to be a breach of security.
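    The kind of unique, high-entropy password a manager generates can be sketched in a few lines. The Python example below uses the standard library’s `secrets` module; the function name and default length are our own choices, not a specific product’s behaviour.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation,
    using a cryptographically secure random source (secrets)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()
assert len(pw) == 20
# Two generated passwords are, for all practical purposes, never equal:
assert generate_password() != generate_password()
```

    The point of delegating this to a manager is that no human ever needs to remember (or reuse) the result.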

    Multi-factor authentication (MFA)

    Multi-factor authentication (MFA) adds an additional layer of security. It requires additional verification beyond just a password, such as SMS codes, biometric data or other forms of app-based authentication. You’ll find that many password managers offer built-in MFA features for enhanced security.

    Some additional security benefits include:

    • Regulatory compliance
    • Safeguarding without password fatigue
    • Easily adaptable to a changing work environment
    • An extra layer of security compared to two-factor authentication (2FA)

    As soon as an IoT device is connected to a new network, you should reset its default credentials with a secure, complex password. A password manager lets you generate unique passwords for each device, securing your IoT endpoints optimally.
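    App-based MFA codes are typically time-based one-time passwords (TOTP, RFC 6238), which are themselves built on HMAC one-time passwords (HOTP, RFC 4226). The Python sketch below, using only the standard library, shows the core of the algorithm; helper names are ours, and real deployments should rely on a vetted authenticator library.

```python
import hmac
import hashlib
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226): HMAC-SHA-1 over a counter,
    dynamically truncated to a short numeric code."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time=None, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238): HOTP over a time counter."""
    t = int(time.time() if for_time is None else for_time)
    return hotp(secret, t // step)

# RFC 4226/6238 test secret; at T=59s the 30-second window counter is 1:
assert totp(b"12345678901234567890", for_time=59) == "287082"
```

    Because both the server and the authenticator app derive the code from a shared secret and the current time, a stolen password alone is not enough to log in.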

    2) Data encryption at every stage

    Why is data encryption so necessary? With the continued growth of connected devices, data protection is a growing concern. In IoT, sensitive information (personal, financial, and location data, etc.) is vulnerable to cyber-attacks if transmitted over public networks. Done correctly, encryption renders data unreadable to anyone without the decryption key, safeguarding it and mitigating unnecessary risks.

    [Figure: additional benefits of data encryption in IoT]

    How to encrypt data in IoT devices

    There are a few data encryption techniques available to secure IoT devices from threats. Here are some of the most popular techniques:

    Triple Data Encryption Standard (Triple DES): Uses three rounds of encryption to secure data, offering a high level of security for mission-critical applications.

    Advanced Encryption Standard (AES): A commonly used encryption standard, known for its high security and performance. It is used by the US federal government to protect classified information.

    Rivest-Shamir-Adleman (RSA): This is based on public and private keys, used for secure data transfer and digital signatures.

    Each encryption technique has its strengths, but it is crucial to choose what best suits the specific requirements of your business.
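    To make the public/private key idea behind RSA concrete, here is a toy walkthrough in Python with deliberately tiny primes. This is strictly an illustration of the principle; real systems use vetted cryptographic libraries, padding schemes, and keys of at least 2048 bits.

```python
# Toy RSA with tiny fixed primes: illustrates the key-pair principle only.
p, q = 61, 53
n = p * q                  # 3233: the public modulus
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 42                       # a "message" encoded as an integer < n
ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
decrypted = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
assert decrypted == message
```

    The asymmetry is the point: anyone holding (e, n) can encrypt, but only the holder of d can decrypt, which is what makes RSA suitable for secure data transfer and digital signatures.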

    Encryption support with Erlang/Elixir

    When implementing data encryption protocols for IoT security, Erlang and Elixir offer great support to ensure secure communication between IoT devices. We go into greater detail about IoT security with Erlang and Elixir in a previous article, but here is a reminder of the capabilities that make them ideal for IoT applications:

    1. Concurrent and fault-tolerant nature: Erlang and Elixir have the ability to handle multiple concurrent connections and processes at the same time. This ensures that encryption operations do not bottleneck the system, allowing businesses to maintain high-performing, reliable systems through varying workloads.
    2. Built-in libraries: Both languages come with powerful libraries, providing effective tools for implementing encryption standards, such as AES and RSA.
    3. Scalable: Both systems are inherently scalable, allowing for secure data handling across multiple IoT devices.
    4. Easy integration: The syntax of Elixir makes it easier to integrate encryption protocols within IoT systems. This reduces development time and increases overall efficiency for businesses.

    Erlang and Elixir can be powerful tools for businesses, enhancing the security of IoT devices and delivering high-performance systems that ensure robust encryption support for peace of mind.

    3) Regular IoT inventory audits

    Performing regular security audits of your systems can be critical in protecting against vulnerabilities. Keeping up with the pace of IoT innovation often means some IoT security considerations get pushed to the side. But identifying weaknesses in existing systems allows organisations to implement a much-needed strategy.

    Types of IoT security testing

    We’ve explained how IoT audits are key in maintaining secure systems. Now let’s take a look at some of the common types of IoT security testing options available:

    [Figure: types of IoT security testing]

    Firmware software analysis

    Firmware analysis is a key part of IoT security testing. It examines the firmware, the core software embedded in IoT products (routers, monitors, etc.). Examining the firmware lets security tests identify system vulnerabilities that might not be initially apparent, improving the overall security of business IoT devices.
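    One concrete check such an audit can automate is verifying a firmware image’s cryptographic digest against a known-good value published by the vendor. A small Python sketch follows; the sample image bytes and the idea of a vendor-published digest are assumptions for illustration.

```python
import hashlib
import hmac

def firmware_digest(blob: bytes) -> str:
    """SHA-256 digest of a firmware image."""
    return hashlib.sha256(blob).hexdigest()

def verify_firmware(blob: bytes, known_good: str) -> bool:
    """Compare against the vendor's published digest in constant time."""
    return hmac.compare_digest(firmware_digest(blob), known_good)

image = b"\x7fELF...example firmware bytes..."   # stand-in for a real image
good = firmware_digest(image)                    # in practice: from the vendor
assert verify_firmware(image, good)
assert not verify_firmware(image + b"tampered", good)
```

    A digest check only detects tampering; it is a complement to, not a substitute for, signed firmware and deeper binary analysis.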

    Threat modelling

    In this popular testing method, security professionals create a checklist based on potential attack methods, and then suggest ways to mitigate them. This ensures the security of systems by offering analysis of necessary security controls.

    IoT penetration testing

    This type of security testing finds and exploits security vulnerabilities in IoT devices. IoT penetration testing is used to check the security of real-world IoT devices, including the entire ecosystem, not just the device itself.

    Incorporating these testing methods is essential to help identify and mitigate system vulnerabilities. Being proactive and addressing these potential security threats can help businesses maintain secure IoT infrastructure, enhancing operational efficiency and data protection.

    4) Training and educating your workforce

    Employees can be an entry point for network threats in the workplace.

    The days when BYOD (bring your own device) meant an employee bringing only a laptop, tablet, and smartphone into the office are long gone. Now, personal IoT devices are also used in the workplace. Think of popular wearables like smartwatches, fitness trackers, e-readers, and portable game consoles. Even connected appliances like smart printers and smart coffee makers are increasingly common in office spaces.

    [Figure: example of increasing IoT devices in the office. Source: House of IT]

    The varied IoT devices spread across your business network are among the most vulnerable targets for cybercrime, attacked through techniques such as phishing, credential theft, and malware.

    Phishing attempts are among the most common. Even the most ‘tech-savvy’ person can fall victim to them. Attackers are skilled at making phishing emails seem legitimate, spoofing real domains and email addresses so a message appears to come from a legitimate business.

    Malware is another common technique: it is concealed in email attachments, sometimes disguised as innocuous Microsoft Office documents that the recipient has no reason to suspect.

    Remote working and IoT business security

    Threat actors are increasingly targeting remote workers. Research by Global Newswire shows that remote working increases the frequency of cyber attacks by a staggering 238%.

    Because remote employees hold sensitive data on a variety of IoT devices, training is even more important. A growing number of companies now secure the personal IoT devices used for home working to the same high standard as corporate devices.

    How are they doing this? IoT management solutions. These provide visibility and control over IoT devices, and key players across the IoT landscape are creating increasingly sophisticated solutions that help companies administer devices and roll out updates remotely.

    The use of IoT devices is inevitable if your enterprise has a remote workforce.

    Regular remote updates for IoT devices are essential to ensure the software is up-to-date and patched. But even with these precautions, you should be aware of IoT device security risks and take steps to mitigate them.

    Importance of IoT training

    Getting employees involved in the security process encourages awareness and vigilance for protecting sensitive network data and devices.

    Comprehensive and regularly updated education and training are vital to prepare end-users for various security threats. Remember that a business network is only as secure as its least informed or untrained employee.

    Here are some key points employees need to know to maintain IoT security:

    • The best practices for security hygiene (for both personal and work devices and accounts).
    • Common and significant cybersecurity risks to your business.
    • The correct protocols to follow if they suspect they have fallen victim to an attack.
    • How to identify phishing, social engineering, domain spoofing, and other types of attacks.

    Investing the time and effort to ensure your employees are well informed and prepared for potential threats can significantly enhance your business’s overall IoT security standing.

    5) Disable unused features to ensure IoT security

    Enterprise IoT devices come with a range of functionalities. Take a smartwatch, for example. Its main purpose is of course to tell the time, but it might also include Bluetooth, Near-Field Communication (NFC), and voice activation. If you aren’t using these features, leaving them enabled gives hackers extra ways to breach your device. Deactivating unused features reduces the risk of cyberattacks by limiting the avenues of attack.

    Benefits of disabling unused features

    If these additional features are not being used, they can create unnecessary security vulnerabilities. Disabling unused features helps to ensure IoT security for businesses in several ways:

    1. Reduces attack surface : Unused features provide extra entry points for attackers. Disabling features limits the number of potential vulnerabilities that could be exploited, in turn reducing attacks overall.
    2. Minimises risk of exploits : Many IoT devices come with default settings that enable features which might not be necessary for business operations. Disabling these features minimises the risk of weak security.
    3. Improves performance and stability : Unused features can consume resources and affect the performance and stability of IoT devices. By disabling them, devices run more efficiently and are less likely to experience issues that could be exploited by attackers.
    4. Simplifies security management : Managing fewer active features simplifies security oversight. It becomes simpler to monitor and update any necessary features.
    5. Enhances regulatory compliance : Disabling unused features can help businesses meet regulatory requirements by ensuring that only the necessary and secure functionalities are active.
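    A simple way to operationalise this is to audit each device’s enabled features against an approved allowlist. The Python sketch below shows the idea; the feature names and the allowlist contents are made up for illustration.

```python
def audit_features(enabled, allowlist):
    """Return the set of enabled features that are not on the allowlist
    and should therefore be disabled."""
    return set(enabled) - set(allowlist)

# Hypothetical smartwatch feature set versus what the business approved:
smartwatch = {"clock", "bluetooth", "nfc", "voice_activation"}
approved = {"clock", "bluetooth"}  # only what is actually used

to_disable = audit_features(smartwatch, approved)
assert to_disable == {"nfc", "voice_activation"}
```

    Run across an inventory of devices, a check like this turns "disable unused features" from a one-off task into a repeatable audit step.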

    To conclude

    The continued adoption of IoT is not stopping anytime soon. Neither are the possible risks. Implementing even some of the five tips we have highlighted can significantly mitigate the risks associated with the growing number of devices used for business operations.

    Ultimately, investing in your business’s IoT security is all about safeguarding the entire network, maintaining the continuity of day-to-day operations and preserving the reputation of your business. Want to learn more about keeping your IoT offering secure? Don’t hesitate to drop the Erlang Solutions team a message.

    The post Top 5 IoT Business Security Basics appeared first on Erlang Solutions .



      Erlang Solutions: Highlights from CodeBEAM Lite London

      news.movim.eu / PlanetJabber • 20 February • 6 minutes

    The inaugural CodeBEAM Lite London conference was held at CodeNode last month, featuring 10 talks, 80 attendees, and an Erlang Solutions booth. There, attendees had the chance to set a high score in a BEAM-based asteroid game created by ESL’s Hernan Rivas Acosta, and win an Atari replica.

    Learning from and networking with experts across the BEAM world was an exciting opportunity. Here are the highlights from the talks at the event.

    Keynote: Gleam’s First Year

    Louis Pilfold kicked things off with an opening keynote all about Gleam, the statically typed BEAM language he designed and developed, which reached version 1.0 a year ago at FOSDEM in Brussels.

    Louis laid out the primary goals of v1: productivity and sustainability, avoiding breaking changes and language bloat, and extensive, helpful, and easily navigable documentation. He then walked us through some of the progress made on Gleam in its first year of official release, with a particular focus on the many convenience and quality-of-life features of the language server, written in Rust. Finally, he measured Gleam’s success throughout 2024 in terms of Github usage and sponsorship money and looked forward to his goals for the language in 2025.

    The Art of Writing Beautiful Code

    “Make it work, then make it beautiful, then if you really, really have to, make it fast. 90 per cent of the time, if you make it beautiful, it will already be fast. So really, just make it beautiful!” Most of us are likely familiar with this famous Joe Armstrong quote, but what does it actually mean to write beautiful code?

    This question was the focus of Brujo Benavides’ talk, a tour through various examples of “ugly” code in Erlang, some of which may well be considered beautiful by programmers trying to avoid repeating code. If beauty is in the eye of the beholder, what’s more important is that each project has a consistent definition of what “beautiful” means. Brujo explored different methods of achieving this consistency, and how to balance it with the need for fast commits of important changes in a project.

    Why Livebook is My Dream Data Science Workbench

    Amplified’s Christopher Grainger took a more cerebral approach to his talk on Livebook, drawing on his background as both a historian and a data scientist to link the collaborative notebook software to a tradition of scientific collaboration dating back thousands of years.

    In his view, the fragmentation of the digital age led to key components of this tradition being lost; he explored how Livebook’s BEAM architecture brings it closer to being a digital equivalent of real-time collaboration in a lab than prior technologies like Jupyter Notebooks, and what further steps could be taken to get even closer in the future.

    Deploying Elixir on Azure With Some Bonus Side Quests

    Matteo Gheri of Pocketworks provided an industrial example of Elixir in action, explaining how his company used Azure in the course of building a Phoenix app for UK-based taxi company Veezu.

    Azure is used to host only 3.2% of Elixir apps, and Matteo walked through their journey figuring it out in detail, touching on deployment, infrastructure, CI/CD, and the challenges they encountered.

    Let’s Talk About Tests

    Erlang Solutions’ own Natalia Chechina took the stage next for a dive into the question of tests. She explored ways of convincing managers of the importance of testing, which types of test to prioritise depending on the circumstances of the project, and how to best structure testing in order to prevent developers from burning out, stressing the importance of both making testing a key component of the development cycle and cultivating a positive attitude towards testing.

    Eat Your Greens: A Philosophy for Language Design

    Replacing Guillaume Duboc’s cancelled talk on Elixir types was Peter Saxton, developer of a new language called Eat Your Greens (EYG). The philosophy behind the title refers to doing things that may be boring or unenjoyable but which lead to benefits in the long run, such as eating vegetables; Peter cited types as an example of this, and as such EYG is statically, structurally, and soundly typed. He then walked through other main features of his language, such as closure serialisation as JSON, hot code reloading, and the ability for it to be run entirely through keyboard shortcuts.

    Trade-Offs Using JSON in Elixir 1.18: Wrappers vs. Erlang Libraries

    Michał Muskała has a long history working with JSON on the BEAM, starting with his development of the Jason parser and generator, first released in 2017. He talked us through that history; writing Jason, turning his focus to Erlang/OTP and proposing a JSON module there, and then building on that for the Elixir JSON module, now part of the standard library in 1.18.

    He discussed the features of this new module, why it was better to use wrappers while transitioning to Elixir instead of calling Erlang directly, and how to simplify migration from Jason to JSON in advance of OTP 27 eventually being required by Elixir.

    Distributed AtomVM: Let’s Create Clusters of Microcontrollers

    A useless machine and a tiny, battery-free LED device played central roles in Paul Guyot’s dive into AtomVM , an Erlang- and Elixir-based virtual machine for microcontrollers. He kicked off by demonstrating La machine, the first commercial AtomVM product, albeit without an internet connection, before explaining AtomVM’s intended use in IoT devices, and the recent addition of distributed Erlang. This was backed up by another demonstration, this time of the appropriately named “2.5g of Erlang” device. Finally, he explained AtomVM’s advantages compared to other IoT VMs and identified the next steps for the project.

    Erlang and RabbitMQ: The Erlang AMQP Client in Action

    Katleho Kanyane from Erlang Solutions then provided another industry use case, discussing how he helped to implement a RabbitMQ publisher using the Erlang AMQP client library while working with a large fintech client. Katleho talked through some of the basics of RabbitMQ implementation, best practices, and two issues he ran into involving flow control, an overload prevention feature in RabbitMQ that throttles components and leads to drastically reduced transfer rates. He wrapped up by discussing lessons he learned from the process and laying out a few guidelines for designing a publisher.

    Keynote: Introducing Tau5 – A New BEAM-Powered Live Coding Platform

    The closing keynote was also the only talk of the day to kick off with a music video, though that should be expected when live coding artist and Sonic Pi creator Sam Aaron is the one delivering it. Sam spoke passionately about his goal to make programming something that everyone should be able to try without needing or wanting to become a professional and discussed his history of using Sonic Pi’s live coding software in education, including how he worked some complicated concepts such as concurrency in without confusing students or teachers.

    He then discussed the limitations of Sonic Pi and how they are addressed by his new project, Tau5. While still in the proof-of-concept stage, Tau5 improves on Sonic Pi by being built on OTP from the ground up, being able to run in the browser, and including new features like visuals to add to live performances. He concluded with a demonstration of Tau5 and an explanation of his intentions for the project.

    Final Thoughts

    CodeBEAM Lite London 2025 was a fantastic day filled with fascinating talks, cool demos, and plenty more to excite any BEAM enthusiast. From hearing about the latest Gleam developments to diving into live coding with Tau5, it was clear that the community is thriving and full of creative energy. Whether it was learning tips for practical BEAM use or exploring cutting-edge new tools and languages, there was something for everyone.

    If you missed out this time, don’t worry: you’ll be welcome at the next one, and we hope to see you there. Until then, keep building, keep experimenting, and above all keep having fun with the BEAM!

    The post Highlights from CodeBEAM Lite London appeared first on Erlang Solutions .


      Erlang Solutions: DORA Compliance: What Fintech Businesses Need to Know

      news.movim.eu / PlanetJabber • 12 February • 7 minutes

    The Digital Operational Resilience Act (DORA) is now in effect as of 17th January 2025, making compliance mandatory for fintech companies, financial institutions, and ICT providers across the UK and EU. With over 22,000 businesses impacted, DORA sets clear expectations for how firms must manage operational resilience and protect against cyber threats.

    As cybercriminals become more sophisticated, regulatory action has followed. DORA is designed to ensure that businesses have the right security measures in place to handle disruptions, prevent data breaches, and stay operational under pressure.

    Yet, despite having time to prepare, 43% of organisations admit they won’t be fully compliant for at least another three months. But non-compliance isn’t just a delay. It comes with serious risks, including penalties and reputational damage.

    So, what does DORA mean for your fintech business? Why is compliance so important, and how can you make sure you meet the requirements?

    What is DORA?

    With technology at the heart of financial services, the risks associated with cyber threats and ICT disruptions have never been higher. The European Parliament introduced the Digital Operational Resilience Act (DORA) to strengthen the financial sector’s ability to withstand and recover from these digital risks.

    Originally drafted in September 2020 and ratified in 2022, DORA officially came into force in January 2025. It establishes strict requirements for managing ICT risks, ensuring financial institutions follow clear protection, detection, containment, recovery, and repair guidelines.

    A New Approach to Cybersecurity

    This regulation marks a major step forward in cybersecurity, prioritising operational resilience to keep businesses running even in the face of severe cyber threats or major ICT failures. Compliance will be monitored through a unified supervisory approach, with the European Banking Authority (EBA), the European Insurance and Occupational Pensions Authority (EIOPA), and the European Securities and Markets Authority (ESMA) working alongside national regulators to enforce the new standards.

    A report from the European Supervisory Authorities (EBA, EIOPA, and ESMA) highlighted that in 2024, of the registers analysed during a ‘dry run’ exercise involving nearly 1,000 financial entities across the EU, just 6.5% passed all data quality checks. This shows just how demanding the requirements are, and the importance of getting it right early for a smooth path to compliance.

    The Five Pillars of DORA

    DORA introduces firm rules on ICT risk management, incident reporting, resilience testing, and oversight of third-party providers. Rather than a one-size-fits-all approach, compliance depends on factors like company size, risk tolerance, and the type of ICT systems used. However, at its core, DORA is built around five key pillars that form the foundation of a strong operational resilience framework.

    Five Pillars of DORA for business

    Source: Zapoj

    These pillars also serve as the basis for a DORA compliance checklist, which businesses can use to ensure they meet regulatory requirements.

    Below is a breakdown of each pillar and what businesses need to do to comply:

    1. ICT Risk Management

    Businesses must establish a framework to identify, assess, and mitigate ICT risks. This includes:

    • Conducting regular risk assessments to spot vulnerabilities.
    • Implementing security controls to address identified risks.
    • Developing a clear incident response plan to handle disruptions effectively.

    2. ICT-Related Incident Reporting

    Companies must have structured processes to detect, report, and investigate ICT-related incidents. This involves:

    • Setting up clear reporting channels for ICT issues.
    • Classifying incidents by severity to determine response urgency.
    • Notifying relevant authorities promptly when serious incidents occur.

    3. Digital Operational Resilience Testing

    Financial institutions are required to test their ICT systems regularly to ensure they can withstand cyber threats and operational disruptions. This includes:

    • Running simulated attack scenarios to test security defences.
    • Assessing the effectiveness of existing resilience measures.
    • Continuously improving systems based on test results.

    4. ICT Third-Party Risk Management

    DORA highlights the importance of managing risks linked to third-party ICT providers. Businesses must:

    • Conduct due diligence before working with external service providers.
    • Establish contractual agreements outlining security expectations.
    • Continuously monitor third-party performance to ensure compliance.

    5. Information Sharing

    Collaboration is a key part of DORA, with financial institutions encouraged to share cyber threat intelligence. This may include:

    • Participating in industry forums to stay informed about emerging threats.
    • Sharing threat intelligence with peers to strengthen collective defences.
    • Conducting joint cybersecurity exercises to improve incident response.

    By following these five pillars, businesses can build a strong foundation for digital resilience. Compliance isn’t just about meeting regulatory requirements; it’s about safeguarding operations, protecting customers, and strengthening the financial sector against growing cyber threats.

    How to Achieve DORA Compliance for Your Business

    Regardless of where a business is on the path to compliance, there are a few key areas it must focus on to protect itself. Here’s what you need to do:

    Understand DORA’s Scope and Requirements

    The first step to DORA compliance is understanding what’s required. Take the time to familiarise yourself with the regulation and ask questions about anything that is unclear.

    Conduct a Risk Assessment

    A solid risk assessment is at the heart of DORA compliance. Identify and evaluate risks across your ICT systems—this includes everything from cyber threats to software glitches. Understanding these risks helps you plan how to minimise their impact on your operations.

    Create a Resilience Strategy

    With your risk assessment in hand, develop a tailored resilience strategy. This should include:

    • Preventive Measures: Set up cyber defences and redundancy systems to prevent disruptions.
    • Detection Systems: Ensure you can quickly spot any anomalies or threats.
    • Response and Recovery Plans: Have clear plans in place to respond and recover if an incident happens.

    Invest in Cybersecurity and IT Infrastructure

    To meet DORA compliance for business, invest in strong cybersecurity tools like firewalls and encryption. Ensure your IT infrastructure is resilient, with reliable backup and recovery systems to minimise disruptions.

    Strengthen Incident Reporting

    DORA stresses the importance of quick and accurate incident reporting. Establish clear channels for detecting and reporting ICT incidents, ensuring timely updates to authorities when needed.

    Build a Culture of Resilience

    Resilience is an ongoing effort. To stay compliant, create a culture where resilience is top of mind:

    • Provide regular staff training.
    • Regularly test and audit your systems.
    • Stay updated on emerging risks and technologies.

    Partner with IT Experts

    DORA compliance can be tricky, especially if your team lacks in-house expertise. Partnering with IT service providers who specialise in compliance can help you meet DORA’s requirements more smoothly.

    Consequences for Non-Compliance

    We’ve already established the importance of meeting DORA’s strict mandates. But failing to comply with these regulations can have serious consequences for businesses, from hefty fines to operational restrictions. Here’s what businesses need to be aware of to protect their organisation:

    Fines for Non-Compliance

    • Up to 2% of global turnover or €10 million, whichever is higher, for non-compliant financial institutions.
    • Third-party ICT providers could face fines as high as €5 million or 1% of daily global turnover for each day of non-compliance.
    • Failure to report major incidents within 4 hours can lead to further penalties.
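    To make the “whichever is higher” rule for institutions concrete, the cap can be sketched as a simple calculation. This is an illustration only; the function name and the turnover figures below are hypothetical, not taken from the regulation:

    ```python
    def max_fine_eur(global_turnover_eur: float) -> float:
        """Return the greater of 2% of global annual turnover or the EUR 10m figure."""
        return max(0.02 * global_turnover_eur, 10_000_000.0)

    # A hypothetical firm with EUR 2bn turnover: 2% (EUR 40m) exceeds EUR 10m,
    # so the percentage-based figure applies.
    print(max_fine_eur(2_000_000_000))
    # A hypothetical firm with EUR 100m turnover: 2% (EUR 2m) is below EUR 10m,
    # so the EUR 10m figure applies.
    print(max_fine_eur(100_000_000))
    ```

    The same pattern applies to the third-party provider cap, substituting the €5 million and 1%-of-daily-turnover figures.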

    Reputational Damage and Leadership Liability

    • Public notices of breaches can cause lasting reputational damage, affecting business trust and relationships.
    • Business leaders can face personal fines of up to €1 million for failing to ensure compliance.

    Operational Restrictions

    • Regulators can limit or suspend business activities until compliance is achieved.
    • Data traffic records can be requested from telecommunications operators if there’s suspicion of a breach.

    How Erlang Solutions Can Help You with DORA Compliance

    Don’t panic, prioritise. If you’ve identified that your business may be at risk of non-compliance, taking action now is key. Erlang Solutions can support you in meeting DORA’s requirements through our Security Audit for Erlang and Elixir (SAFE).

    With extensive experience in the financial sector, we understand the critical need for resilient, scalable systems. Our expertise with Erlang and Elixir has helped leading fintech institutions, including Klarna, Vocalink, and Ericsson, build fault-tolerant, high-performing and compliant systems.

    SAFE is aligned with several key areas of DORA, including ICT risk management, resilience testing, and third-party risk management:

    • Proactive Risk Identification and Mitigation: SAFE identifies vulnerabilities and provides recommendations to address risks before they become critical. This proactive approach supports DORA’s requirements for continuous ICT risk management.
    • Continuous Monitoring Capabilities: SAFE allows ongoing monitoring of your systems, which aligns with DORA’s emphasis on continuous risk detection and mitigation.
    • Detailed Incident Response Recommendations: SAFE’s detailed findings help you refine your incident response and recovery plans, ensuring your systems are prepared to quickly recover from cyberattacks or disruptions.
    • Third-Party Risk Management: The security audit can provide insights into your third-party integrations, helping to ensure they meet necessary security standards and comply with DORA’s requirements.

    Conclusion

    DORA compliance is now in effect, making it essential to act if your business isn’t fully compliant. Delays can lead to penalties and increased risk exposure. Prioritising ICT risk management, strengthening resilience, and ensuring proper incident reporting will bring you closer to compliance. But this isn’t just about meeting requirements; it’s about safeguarding your organisation and building long-term operational resilience.

    If you have compliance concerns or just want to talk through your next steps, we’re here to help. Contact us to talk through your options.

    The post DORA Compliance: What Fintech Businesses Need to Know appeared first on Erlang Solutions .


      ProcessOne: Join our community: Free Memberships now available

      news.movim.eu / PlanetJabber • 7 February • 1 minute

    We’re excited to announce a new way to connect with our community at Process-One. As of today, we’ve enabled free memberships on our site, giving you even more ways to stay updated, interact, and engage with our content.

    Why Sign Up?

    By becoming a member, you get access to specific benefits, including:

    • The ability to engage with our content in new ways, such as commenting on posts, participating in discussions as we did before, and receiving exclusive insights.
    • A direct connection to the ProcessOne team and the latest updates on ejabberd , Fluux.io , and our other projects.
    • Notifications when new articles are published.

    Ghost’s free membership system is designed to help build an engaged community. It allows you to stay informed, participate actively, and create a closer connection—without any cost or commitment, while ensuring our content remains valuable to a genuine human audience.

    We have no plans for paid memberships ; our goal is simply to share updates about our projects and the XMPP ecosystem. Additionally, we respect your privacy—your email will only be used to notify you about new content, and we will never sell or misuse it.

    It’s free and easy to join.

    Signing up is completely free —just create an account and start enjoying the benefits right away. No strings attached!

    Prefer RSS? We’ve Got You Covered

    If you prefer to follow updates through RSS, you can always subscribe to our feed and get the latest content delivered straight to your reader of choice. Subscribe over here. ;)

    We’re always looking for new ways to enhance the experience for our readers, and this is just the beginning. We hope you’ll join us and be part of our growing community!

    Sign up today and stay connected!


      Erlang Solutions: Women in BEAM

      news.movim.eu / PlanetJabber • 6 February • 14 minutes

    In this post, I will share the results of the Women in BEAM 2024 survey . But first, I would like to share my experience in the BEAM community to understand the motivation behind this initiative.

    My journey

    I’ve been working with Elixir since 2018, but my interest in it wasn’t driven by technical advantages—it was sparked by my experience at my first ElixirConf in Mexico.

    Since 2017, I’ve been involved in initiatives supporting women in tech, frequently attending events to learn and observe industry gender gaps. A major challenge I noticed was the barrier of seniority—many talks were difficult to follow for newcomers, and women, already underrepresented (often less than 30% of attendees), could feel even more excluded. Unfortunately, I grew used to this dynamic, but it was always awkward.

    In 2018, I was invited to ElixirConf Lite in Mexico City. From the start, I felt welcomed—no barriers, no judgment, just an open and friendly community. Inspired by this inclusivity, I decided to explore Elixir, later falling in love with its technical strengths.

    Since then, I’ve combined my passions for diversity in tech and Elixir. A few months ago, I committed to a focused initiative: the survey.

    Background

    I have been part of the Code BEAM America committee for the last three editions, so I know the effort that goes into achieving a gender-balanced panel and promoting diversity at the conference, such as the Diversity & Inclusion Programme, an initiative that has undoubtedly yielded results.

    For example, the following graph corresponds to the number of women at CodeBEAM America since 2015:

    Women in BEAM survey results, women at CodeBEAM America

    The number increases between editions in almost every case, and in some, such as March 2021, the percentage is nearly a quarter. However, getting female speakers remains a challenge every year.

    I know many women working with Elixir and some with Erlang or Gleam. When I invite them to give a talk, their common response is, “Oh! I don’t think I have anything interesting to share”. I know it’s not true, but I don’t blame them because I know the feeling. Sometimes, I have stopped sharing content or talks for fear of not having enough experience, and I often get so nervous that I let the impostor syndrome win.

    As I mentioned earlier, my initial reason for getting interested in Elixir was inclusion. During all these years, I have never had a bad experience in the community, which led me to wonder what is behind these barriers. The cultural context has a lot to do with it, and it is not something specific to the BEAM community; however, I was interested in learning more about other women’s perspectives on the topic.

    There were a good number of responses for this first edition, and based on the open responses, I decided to focus the results on four main sections: Diversity in Roles, Challenges for Junior Developers, Programming Language Preferences, and Diversity and Inclusion.

    Survey Highlights

    The survey included many open-ended questions, and while all responses were different, some aspects were repeated across many, so the sections below are grouped based on similar responses.

    Diversity in Roles

    I decided to start with this section because role diversity is directly related to the topic of role models, which, from my perspective, is a determining factor in promoting greater participation of women in the BEAM community.

    According to this article, women occupy only 11% of leadership positions in technology. This represents a barrier for women working in the industry and new generations, who may not easily see themselves reflected in these numbers. Aspiring to a leadership position is much easier when you have an example in mind, whether it’s a public figure, a teacher, a coworker, etc. This also applies to open-source contributions, technical talks, and more.

    Therefore, it is essential to highlight the diversity of responses to the question about the primary role.

    The majority of women surveyed indicated that they are Software Developers/Engineers. I wasn’t surprised since most women I know in the community play this role, but I was thrilled that this wasn’t the only answer, so let’s dig deeper into those who indicated they played a different role.

    Women in BEAM survey results, Diversity in Roles

    • One woman shared that she is dedicated to research and teaching, a direct way to pass on knowledge and experience. BEAM languages are often overlooked in education, as functional programming isn’t typically prioritised, but having a mentor can change that.

    A teacher can encourage event participation, recommend key books, and even organise group attendance. Most importantly, integrating Elixir or Erlang into lessons sparks interest in new learners. Research also plays a vital role, inspiring students to explore deeply and cultivate the curiosity we value in the community.

    • There is one mention of a Technical Leader and two of Engineering Manager, all referring to leadership positions that can serve as role models for those aspiring to lead teams and take on greater responsibility. It is worth noting that the years of experience differ across the three answers, which breaks the myth that such positions are tied to years of experience rather than the knowledge and value these women bring to a team.
    • One respondent is a student, though her school level isn’t specified, so it’s unclear if she had prior BEAM experience. Still, it’s clear the BEAM community has successfully expanded its reach—not just in the workplace but also among students who can share their enthusiasm with peers and teachers.

    • Finally, there is a Project Manager answer, which is a big plus for someone working with a team of developers. Experience with the technical side and the technologies used in a project or team allows for a deeper understanding and better technical suggestions; she can encourage attendance at events to improve the team’s skills and promote using BEAM languages in other areas.

    Challenges for Junior Developers

    This section is interesting, as the survey had no direct questions about juniors and their challenges. Still, I decided to add it because there was an open question about how easy it is for women to get a BEAM-related job. Even those who said it was easy from their perspective mentioned that it depends on the years of experience, and that for juniors it is complex because companies prefer to hire someone with previous experience rather than train someone. Let’s analyse the answers:


    Most women surveyed said they had between 3 and 5 years of experience working with a BEAM programming language.

    Women in BEAM survey results, Years of Experience

    57.1% of the total indicated that they currently have a BEAM-related job, but despite this, 71.4% consider that it is not easy to find job opportunities.

    Women in BEAM survey results, BEAM-related job opportunities

    The reasons relate mainly to two factors. The first is that popular platforms such as LinkedIn carry fewer offers than for other technologies, and respondents do not know which other sites or channels to check. The second is the set of challenges that juniors face, which we will delve into now.

    “It is difficult to get a job because (BEAM technologies) are not broadly used, and it is harder for many people to have previous production experience.”

    Many of the responses in this section agree that it is relatively easy to get a BEAM job when you already have at least two years of experience.

    “If you are a junior developer, getting a job is very hard / Most companies only offer senior positions.”

    I understand the problem, and in the end, it becomes a vicious circle: someone with no experience can’t get a job, but how can she gain experience if she can’t join a team? So it is essential to talk about ways of building experience beyond what a company can provide.

    I love working with Elixir because you can start a project from scratch and see results quickly. The documentation and resources—tutorials, blogs, and books—are excellent, and the same likely applies to languages like Gleam.

    You can build experience through personal projects, coding challenges, or even creating a website. I enjoy writing to reinforce my learning, and if you do too, I encourage you to start a blog—it’s a great way to gain experience and make yourself visible.

    Here are some resources to get you started:

    Another indirect way to gain experience is by attending events. 68.6% of the women surveyed said they like attending both virtual and in-person events, and 28.6% indicated that they prefer virtual events only.

    Women in BEAM survey results, BEAM events

    Attending meetups and conferences helps you learn about current technical challenges, BEAM updates, and more. Even if you are just starting out, it will give you an idea of the topics you can focus on.

    These actions may seem minor, since they are not the same as saying you have x years of experience at a company, but they will undoubtedly make a difference. They will also help you find the area you would like to specialise in or learn more about, get to know the community, and open up the possibility of finding mentors.

    Programming Language Preferences

    I’m an Elixir developer, so I initially decided to focus the survey on just that programming language, as it’s familiar to me. However, seeing content about Erlang and Gleam in the community is becoming more common, so I decided not to limit it, and I was pleasantly surprised by the diversity of responses.


    Most women indicated that Elixir is the primary programming language they use , but it was not the only one. In this question, 14.3% indicated they work with Erlang and 11.4% with Gleam.

    Women in BEAM survey results, Programming Language

    Additionally, the survey included a question about other technologies, either as a hobby or as a secondary language. Most women working with Elixir as a primary technology indicated that they were experimenting with Erlang as an additional language, and vice versa. This is not surprising: if you work with Elixir and dig into the fundamentals, you will eventually explore Erlang. On the other hand, if your primary programming language is Erlang, it can be pretty fun and easy to explore Elixir.

    Women in BEAM survey results, BEAM programming language

    Something else worth mentioning in this question was that LFE and EML each received a mention. I was surprised, as I am not familiar with either of them, but it made me think about everything I still have to explore in BEAM and the alternative options available. In some way, it also motivated me to investigate further, and that is precisely the meaning of the community: sharing knowledge.

    As an extra, someone else mentioned that although it wasn’t a programming language per se, their favourite secondary technology was LiveView. If you, like me, are curious to explore everything that BEAM has to offer, you can find out more in the following list: Languages, and about languages, on the BEAM.

    Diversity and Inclusion

    I believe diversity and inclusion are strongly promoted in the BEAM community, setting it apart from other technologies. I was eager to hear other women’s perspectives, whether they share this view, and what actions we can take to improve further.

    This section explores several related questions in depth, but the key takeaway is that most agree diversity and inclusion are actively encouraged in the community.

    Of the total number of women surveyed, 82.9% consider that diversity and inclusion are promoted in the BEAM community, compared to 17.1% who think they are not.

    Diversity and Inclusion.

    The women who answered no indicated that this is because they know few or no other women at their company working with a BEAM programming language, and they are not aware of any initiatives on the topic. Beyond that, however, they reported no issues: they have never had any gender problems and they like attending community events.

    On the other hand, women who believe that these topics are promoted shared that the main reason is the warmth of the people: for example, events where they felt safe to share without fear of being judged, or contacting one of the community’s pioneers on social media and receiving support, and in some cases even mentoring.

    “From my experience at Code BEAM Europe, the BEAM community felt very welcoming. It seemed like a space where people could make mistakes, try new things, and learn together. That openness makes it easier for different perspectives to be part of the conversation.”

    So far, so good; the general outlook is positive. But there is a tricky aspect to mention: four women reported having had gender problems in the community. This was a closed question, and I did not go into the subject in depth so as not to make these women uncomfortable, but it is certainly an aspect that needs to be worked on.

    Gender issues

    This leads us into the next section: the steps to follow.

    Actionable Steps

    The actions listed here aren’t solely focused on gender issues but aim to make the BEAM community more inclusive, based on suggestions from the women surveyed.

    Gender Policies and Codes of Conduct

    Many respondents highlighted the need for clear gender policies and better awareness of them. They support reinforcing codes of conduct at conferences, ensuring attendees know who to contact if issues arise.

    One woman admired a company’s anti-harassment policy, and I agree—though few respondents reported problems, we must not minimise the issue. Strong community support makes addressing misconduct easier.

    Spaces Dedicated to Women in BEAM

    This was a recurring theme. Many women cited impostor syndrome as a barrier to participation and expressed interest in safe spaces to ask questions, practise talks, and seek advice.

    “In my case, I don’t feel 100% comfortable in the environment but I am not sure how to promote greater participation. Maybe it will help if we create a small subgroup for women/nonbinary in the community to promote ourselves or to share projects and ideas.”

    “Create women’s support groups. Where we can have learning sessions, mentors, talk about the working environment, talk about career levels to look forward to, give advice, etc.”

    Support for beginners

    Returning to the challenges faced by juniors, some suggestions were to produce more content for women who have little or no experience with BEAM, focusing especially on the reasons it is worth giving it a try.

    Role Models

    This is definitely my favourite measure. I have always supported promoting role models in technology to encourage more girls and teenagers to take an interest in the field, so I was delighted to learn that this is a common opinion.

    Many of the women surveyed pointed out that having a role model in the community can help with the goal of getting more women interested and participating.

    “Highlighting the work of women already active in the community can make a difference. Seeing other women as speakers and leaders may encourage more to step forward.”

    “I think the more visible women are in the community, the more women will participate.”

    “Just seeing other women speak is an example to me. Seeing others who are relatable to me helps me realize I can just get up there and be me and speak on something I am interested in.”

    These are just a few of the related responses.

    Acknowledgements

    I would like to take this opportunity to mention the people who came up in the survey. Many of the women said they do not have a female role model, but that along the way they have met men who support diversity, and they would like to acknowledge that.


    Laura Castro, Elaine Naomi, “Tobias Pfeiffer, who really advocates for diversity”, Robert Virding, Peer Stritzinger, Sigu Magwa, Sophie Benedetto, “Female role models are Ingela Andin from the OTP team, her history and dedication to working with the BEAM are great, and Hayleigh from the Gleam team, she is such a brilliant person”, “Some of my favourite folks I have seen speak, and who make me feel included in the community are: Meks McClure, Miki Rezentes, Jenny Bramble”, and to the women who mentioned me: thank you so much, I want to tell you that you made me smile a lot.

    Women in BEAM Conclusion

    I would like to thank all the women who took part in the survey, and to everyone who shared it on social media or with colleagues. Most of all, thank you to those who care about diversity and inclusion and work to make the BEAM community better every day.

    I’ll be following up on all the comments and suggestions, and some women have even reached out to collaborate, which I’ll also pursue. Based on the responses, I’ve decided to make the survey an annual initiative. The details are still in the works, but I’ll keep you updated.

    Lastly, thanks to all the role models in companies, schools, and the community, who inspire more women to discover how incredible Women in BEAM is.

    See you in the next edition!

    The post Women in BEAM appeared first on Erlang Solutions .