TL;DR: Skeleton of The Article
The upcoming article will start with a gentle introduction to the framework of knowledge needed to introduce the Reactive Model into an organization. It will then cover how to design a reactive system (taking into account whether you are rewriting an already-existing system or starting from scratch), present different real-world cases where the Reactive Model is and is not a good fit, and conclude with a recap of what we’ve discussed and a look at what’s coming next.
Introduction to the Reactive Model
This piece is the second part of a series of articles on Reactive Programming; the first part can be found on our company blog.
There, we introduced the concept of Reactive Programming, explaining what it is and what its main advantages are, comparing it to the classical imperative/OOP approach and showcasing an example of real-world usage.
This is to say that the focus was on the Reactive Programming paradigm itself, with no space to discuss when it should be adopted and how, in terms of the changes, both at the programming level and, to a lesser extent, at the organizational level, that are needed to successfully embrace it.
The idea is that we are going to look at the right mindset that needs to be adopted before a paradigm like this can be attempted, and also at how to spot the best use cases for such a programming model. We will show that it performs better in some cases than in others, and that we don’t need to do a full rewrite of the applications that are already deployed: the adoption can be gradual and iterative, with new parts of the system being refactored over time.
Changing how applications are written and challenging the “status quo” of an engineering department is never easy and is a change that needs to be strongly motivated by the architect (or lead developer), gradual in terms of its adoption and embraced with a positive and learning mindset from the engineers that are going to be implementing it.
This is because the Reactive Programming paradigm is a very different way of writing applications and designing systems. It takes time to appreciate and a lot of experience to master, because the set of changes that come with it is quite substantial and involves all parts of the system, from the Data Layer (Database) to the Presentation Layer (HTTP).
It is possible to mix and interleave reactive and non-reactive code, but this needs to be done carefully and with the right tools at hand, otherwise any benefit introduced with the reactive components will be nullified by the blocking nature of the non-reactive code, stalling the whole pipeline and thus blocking a thread that should not be blocked.
In the following chapters, and later in the following articles, we will discuss how to introduce the Reactive Programming paradigm into your already-existing system, when it’s better to do so and for which use cases it is the right choice, and, when the time comes to make the jump and write a new service in all-reactive code, what the best practices are, both at the code level and at the architecture level.
It’s All About the Reactive Model
Before discussing when it’s the right time to implement reactive components and non-blocking services, it’s crucial to note a few things about what it means to be reactive at the system design level.
In the previous article we had a look at the Reactive Model in terms of code and single components that form a Reactive Pipeline, but that’s only half of the picture.
The other half is best introduced with a number of questions, specifically the following ones:
- What does it mean for a system as a whole to be reactive?
- Where do responsibilities shift in terms of module ownership in a Reactive System?
- What is the key concept of a Reactive System?
- How are cross-cutting concerns of a system addressed in the Reactive Model?
The answers to these questions will clarify what is meant by “Reactive Model” when we talk about system design, how it is defined and what its characteristics are.
This Model forms, together with the notions of non-blocking operations and the Reactive Pipeline, the building blocks of the “theory” of Reactive Programming, shifting the viewpoint from a “passive” system, where modules are acted upon by external entities (due to delegation), to a “reactive” system, where modules are responsible for their own state and update themselves instead of delegating this to others.
With that said, let’s have a look at the answers to the questions we posed a few moments ago.
Q: What does it mean for a system as a whole to be reactive?
A: A system is reactive when two conditions are met:
- All of its components are reactive. This can be achieved by writing them to be reactive from the start or converting already existing code so it becomes reactive.
The latter is achieved, to simplify, by “wrapping” the blocking code in special reactive operators that move the blocking operation to a different thread and listen for its completion, so that the result can be put into one of the reactive collections (Mono or Flux) and returned to the caller, exactly like a “native” reactive implementation.
- The logical and business models that compose the system are responsible for their own change. This means that there are no “logical setters” that change the module from the outside, but instead it’s the module itself that listens to events that occur in the system and reacts to them accordingly, usually via callbacks that are registered to be called when a particular event is received.
We will have a look at this more in depth in the following chapters where we will also provide an example of this property.
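As a concrete sketch of the “wrapping” technique described in the first condition: in Project Reactor this is typically written as “Mono.fromCallable(...).subscribeOn(Schedulers.boundedElastic())”, which moves the blocking call onto a scheduler meant for blocking work. The same idea can be illustrated with only the JDK; here “loadUserFromDisk” is a hypothetical blocking call, and the thread pool size is arbitrary:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BlockingWrapper {
    // Dedicated pool for blocking work, so the event-loop threads stay free.
    static final ExecutorService BLOCKING_POOL = Executors.newFixedThreadPool(4);

    // Hypothetical blocking operation (e.g. legacy file or JDBC access).
    static String loadUserFromDisk(int id) {
        return "user-" + id;
    }

    // Wrap the blocking call: it runs on BLOCKING_POOL, and the caller
    // gets back an async handle it can compose without blocking.
    static CompletableFuture<String> loadUserAsync(int id) {
        return CompletableFuture.supplyAsync(() -> loadUserFromDisk(id), BLOCKING_POOL);
    }

    public static void main(String[] args) throws Exception {
        String result = loadUserAsync(42)
                .thenApply(String::toUpperCase) // non-blocking continuation
                .get();                         // only the demo blocks here
        System.out.println(result);
        BLOCKING_POOL.shutdown();
    }
}
```

The important property is that the caller composes continuations (thenApply) instead of blocking its own thread; only the demo’s final get() blocks, which a real reactive pipeline would avoid by subscribing instead.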
Q: Where do responsibilities shift in terms of module ownership in a Reactive System?
A: The shift of module responsibilities in the Reactive Model is noticeable and can be described as follows:
“Each module in the Reactive Model is responsible for updating its state and keeping it updated, reacting to events that are propagated from either the module itself or from other modules. This is in contrast to the classical approach of delegating responsibility of updating its own state to external entities by providing public setters/modifiers that is typical of a passive system”. (Source: André Staltz, PolyConf 2016)
The responsibilities have been put back inside the module itself, because it is that module that’s now responsible for itself and is not delegating this change to another module.
Ideally, in the Reactive Model, there are no setters because there is no need to change a module from the outside, because that module contains all the logic that’s needed to update itself, based on events that are propagated by the system.
This change is pretty important because it highlights one key concept of the Reactive Model: if you want to know all the places where some state is updated, in the “passive/classical” implementation you have to look for every usage of the setter methods, while in the “Reactive Model” you can just look inside the module and check which events it subscribes to, this information being centralized in the module itself.
Q: What is the key concept of a Reactive System?
A: If we have to choose one key concept to define a Reactive System, it would certainly be that it is implemented according to the “Reactive Model” and is thus composed of a number of highly decoupled modules that communicate with other modules via events.
This means that the module, say, for the Cart Checkout does not need to know about the Analytics module in order to produce data that can be later aggregated and analyzed.
No, all the Cart Checkout module has to do is to broadcast a specific event, and whoever listens to this event (i.e. that subscribed to this event) will be notified and will react accordingly, for example by updating some analytics counters.
The key point here is that separating modules and having them broadcast events lets us move the responsibility of updating a module’s state from an external entity to the module itself, via callbacks to be executed when a specific event is received.
This means that the module does not delegate any of its internal state logic to another module, but instead this process is centralized and can be easily understood (everything happens in one place) and debugged.
Q: How are cross-cutting concerns of a system addressed in the Reactive Model?
A: When there are shared business needs that are to be addressed, e.g. Analytics or Request Tracing or User Profiling, the Reactive Model suggests the creation of a dedicated reactive component that will consume all the relevant events for that business need.
Specifically, in the example case of Analytics, a solution might be to create a new reactive component that subscribes to the already-existing events of the other modules it is interested in, reacting accordingly.
Continuing with our Analytics example, if we want to keep track of how many logins happen in our application, all we have to do is to create a new reactive component for the Analytics, and subscribe to the “login.successful” event and register a callback to increment a counter every time it is called.
This has the added benefit that we can disable the Analytics module all at once, by simply deleting the reactive component, by not listening to any event, or by registering “no op” callbacks.
The emphasis is on the fact that a different business need (Analytics) can be implemented starting from the events that are already used by other modules.
This is because events are generic, and what makes them useful is what we do with them in our callbacks.
For example, the same event “login.successful” can be used by the “SecurityManager” module to unlock some parts of the application that are now accessible (e.g. by granting additional permissions) and it can be used from the “Analytics” module to increment a counter and keep track of this metric.
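To make this concrete, here is a minimal, hypothetical event-bus sketch (the bus and the subscription API are illustrative, not a prescribed library) showing the same “login.successful” event driving both an Analytics counter and a SecurityManager-style grant list:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class EventBusDemo {
    // A toy event bus: event name -> list of subscriber callbacks.
    static final Map<String, List<Consumer<String>>> subscribers = new HashMap<>();

    static void subscribe(String event, Consumer<String> callback) {
        subscribers.computeIfAbsent(event, e -> new ArrayList<>()).add(callback);
    }

    static void broadcast(String event, String payload) {
        subscribers.getOrDefault(event, List.of()).forEach(cb -> cb.accept(payload));
    }

    public static void main(String[] args) {
        // Analytics module: owns its own counter, updates it on the event.
        int[] loginCount = {0};
        subscribe("login.successful", user -> loginCount[0]++);

        // SecurityManager module: reacts to the very same event differently.
        List<String> grantedUsers = new ArrayList<>();
        subscribe("login.successful", grantedUsers::add);

        // The login module only broadcasts; it knows nothing about the others.
        broadcast("login.successful", "alice");
        broadcast("login.successful", "bob");

        System.out.println(loginCount[0] + " logins, granted: " + grantedUsers);
    }
}
```

Notice that the module doing the broadcasting never references Analytics or SecurityManager: disabling either one is just a matter of not registering its callback.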
One of the challenges of the Reactive Model is creating the right set of events for the system to propagate, striking a balance between a level of detail that’s too fine (events so specific that they are useless) and one that’s too coarse (events so generic that they are not useful).
For example, an event along the lines of “login.passwordWrongBy3Letters” is very likely useless, because we usually only care about whether the password is right or wrong; we don’t need to know by how many letters it is wrong, as that’s not something we can use meaningfully.
An event like “login.password” is again very likely useless, because it does not convey any information on how the authentication process went, which is what we care about if we are talking about the login process.
When in doubt, it’s better to define more events that can later be grouped together rather than fewer.
This is because if we have more events than we need, we are likely in the “too fine” case, which means that some events will have the same logical meaning for us and our code will deal with these “event groups” in the same way.
Once we have these implicit event groups that are treated the same way in our code, it’s very easy to refactor the events that are usually grouped together into a single common event, and adjusting our code becomes a matter of substituting multiple event checks with just one check for the new event.
If we define fewer events than we need, we are likely in the “too coarse” case, which means that we cannot properly distinguish between some business cases that we need to handle differently, and when the time comes to do just that, we will have to create new code branches to handle the newer events.
Creating new code branches is a risky process, because we are multiplying our state space by the number of branches we create, which means that we are expanding the possible number of bugs and corner cases we introduce in our code, thus making it less robust.
Expanding our state space and increasing the possible bug count is not something we want, so it follows that we should prefer a “too fine” level of detail, which can be easily refactored later on, over a “too coarse” one, because of the difference it makes when we have to re-evaluate our design and refactor our system.
With this out of the way, and with an informal definition of the Reactive Model in terms of internal components responsibilities and propagation of change, we are ready to move on to the next chapter of how to design a Reactive System.
How to Design a Reactive System
The first step in designing a Reactive System is defining our requirements and checking that they align with the strengths of the Reactive Model.
This might look like a trivial step, but it’s important to know that we are moving in the right direction and that our solution will benefit from a reactive design.
To name some of the benefits:
- Non-blocking nature, which allows for highly parallel handling of requests and load
- Predictable load patterns thanks to backpressure
- Limited number of threads used to handle requests through the use of thread pools
- Response streaming capability
The above list is just the tip of the iceberg of the advantages that a reactive solution introduces, but it’s a good starting point to evaluate our needs and make sure that the work we are going to perform is well justified.
Once we have established that we want to introduce a reactive solution for our system, the next step differs depending on which case we are operating in:
- In the case of a pre-existing system that needs to be updated/refactored/expanded
- In the case of a greenfield development for a new system that needs to be reactive from the start
We will have a look at each of these cases separately as they differ in terms of effort required, resources and way of approaching the codebase.
How to Make an Existing System Reactive
As the title of this section suggests, the idea is that we want to expand/refactor/rewrite an existing system that’s already implemented to follow the Reactive Model and thus be non-blocking.
This task is not easy and requires a deep understanding of the existing system, as we will need to change some logic that’s already in place to accommodate the reactive part that we will introduce, although we will try to keep changes at a minimum.
It is important to understand that it’s possible to convert just part of a system to the reactive approach, perhaps sacrificing the event broadcasting part at the beginning, but only if we rewrite a “vertical” slice of it, that is, a part of the system that contains the full request-handling code path, from the Controller to the Database.
For example, we might rewrite only one of the many controllers that already exist to make it reactive, but if, for that controller, we don’t also use a reactive database driver, a reactive data manipulation library and so on, we will not reap any benefit from this rewrite.
This means that we can change part of a system, but we need to do so for the entire request lifecycle (in the case of a Web Application), otherwise any blocking part will break our reactive promise (link) and invalidate any performance gain.
The very first step is to make sure that the software stack used by the existing system is capable of doing non-blocking I/O and more in general is compatible with the Reactive Model.
There is no point in trying to convert a system if the underlying software stack that it uses does not support the Reactive Model.
This means that if the software stack the project is currently using, meaning the set of libraries, operating system, runtime environment and generally everything that’s required to run the system, does not support non-blocking operations, callbacks and thread pools as a minimum, we cannot do anything to convert it.
In this unfortunate case, the step before the first, let’s call it step zero, is to migrate the existing project onto a software stack that supports the Reactive Model, updating libraries and making sure that the underlying operating system and runtime environment are capable of non-blocking operations.
In the case that the existing software stack supports the Reactive Model, the next step is making sure that we have a good test coverage (where possible) of the modules we are going to rewrite.
This is because we don’t want to rewrite parts of tried-and-tested logic that stood the test of time only to introduce regressions or miss edge cases that were never documented; to avoid this, we are going to need solid test coverage of the module we are going to touch.
As we know, good test coverage is not always possible, so if we don’t have it, and we can’t add it for any reason, we need to be extra careful when touching the module, and be sure to start adding new tests along with our reactive code.
Now that we are ready to update this existing module, the next step is identifying the data flow and the relevant events that need to be broadcasted to interact with this module.
This step can be split into two parts to make it easier: we avoid introducing the event broadcasting at the same time that we rewrite the code to be reactive, simplifying the process.
What does it mean to identify the data flow? Why is this necessary to build our Reactive Pipeline?
The data flow is defined as the code path, in terms of modules, that the application data traverses during the request lifecycle.
This code path is what we need to define and it will be the heart of our Reactive Pipeline, because it will contain all the reactive operators that will modify our reactive collection until we have performed all the necessary work and we can return this final version to the caller.
There is not much advice that can be given on this step without analyzing each rewrite on a case-by-case basis, but the general spirit is to identify what kind of data is needed and how it needs to be transformed to obtain our final result.
For example, when rewriting an eCommerce system module like the “Invoice Module”, the data flow might involve the “Cart Module” (when an item is added to the cart, its price is also added to the invoice) and the “Coupon Module” (when an item changes its price, this also needs to be reflected in the invoice).
This high-level data flow also needs to go into more detail on what exactly happens inside the “Invoice Module”, likely checking that some items are correctly billed (as they may have different rules based on destination country, VAT, etc.) and also performing all sorts of business checks that are required.
Once we have our data flow roughly defined, the next step is identifying the operations that are needed to obtain an invoice (in this example) from a series of prices.
Remember that what we receive is a Reactive Collection that is updated in real time, so every price change is reflected immediately on our code.
For example, we might need an operation in our Reactive Pipeline to group and sum the prices to get the total for the invoice, and we might also need an operation to convert all the prices to the same currency if they are not already, and so on.
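As a rough sketch of the kind of operations just described, here is a non-reactive analogue of the operator chain written with plain Java (in a real pipeline these would be operators such as map and reduce on a Flux of prices); the currencies and the conversion rate are made up for illustration:

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.List;

public class InvoicePipelineSketch {
    record Price(BigDecimal amount, String currency) {}

    // Hypothetical fixed conversion rate, for illustration only.
    static final BigDecimal USD_TO_EUR = new BigDecimal("0.90");

    // Operation 1: normalize every price to the same currency (EUR).
    static Price toEur(Price p) {
        if (p.currency().equals("EUR")) return p;
        return new Price(p.amount().multiply(USD_TO_EUR), "EUR");
    }

    // Operation 2: sum the normalized prices to get the invoice total.
    static BigDecimal invoiceTotal(List<Price> prices) {
        return prices.stream()
                .map(InvoicePipelineSketch::toEur)
                .map(Price::amount)
                .reduce(BigDecimal.ZERO, BigDecimal::add)
                .setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        List<Price> cart = List.of(
                new Price(new BigDecimal("10.00"), "EUR"),
                new Price(new BigDecimal("20.00"), "USD"));
        System.out.println(invoiceTotal(cart)); // 10.00 + 20.00 * 0.90
    }
}
```

The same two stages (normalize, then aggregate) would appear as operators in the Reactive Pipeline, with the difference that a Flux would re-emit an updated total whenever a price changes.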
Once we have an initial idea of the data flow, the format of the data and the kind of operations we need to perform on this data, we can start implementing our first version of the Reactive Pipeline, and start converting our existing code according to the Reactive Model.
There will be instances in the module or in the system where the code cannot be rewritten following the reactive approach (for any reason), and in those cases the solution is to offload the blocking operations to another thread and listen for updates on it, converting the return value of each operation into a Reactive Collection.
We will go into more detail on this later during this series, but for a quick look at the topic these links are a good starting point: Mono from a blocking operation and Flux from a blocking operation.
Now that we have a basic implementation of the Reactive Pipeline, it’s a matter of iterating on it: challenging and revisiting the assumptions made during the design phase if they conflict with the real status of the system, tweaking the data representation (maybe some value could be a Flux instead of a Mono, for example) and changing some operators (maybe we can pre-process some data in another part of the pipeline and save some complexity here) to achieve our desired result.
As with everything, converting an existing system to the Reactive Model is a matter of experience and trial and error, but it’s worth it because the payoff outweighs the problems we might face during the rewrite. This is true only if we follow the precautions mentioned above, because otherwise we might introduce more issues than we solve by rewriting the system.
How to Design a New System to Be Reactive from the Start
In the case that our implementation is greenfield and we have the freedom to start designing and building our system to be reactive from the start, our task becomes significantly easier.
This is because we start with a software stack that’s already reactive, and we design our system from the beginning in terms of data flow, reactive operators and event broadcasting, simplifying our implementation and making sure the architecture comes out right from the start.
The very first step when designing a Reactive System is understanding that, according to the Reactive Model, each module is responsible for its own updates and keeping its state updated.
This means that we need to be able to register callbacks on events that happen on other modules so we can react to these changes accordingly, updating the internal state of our module and taking responsibility for doing so.
There is a shift in terms of responsibilities that comes with the Reactive Approach, and that’s because now, if we imagine our system as a series of boxes linked by arrows, where the former are the business modules and the latter are the operations that happen on a given module, we can observe that:
- In the classical/traditional system design, the “passive” design, a module initiates a change on another module by calling a public method on the destination module with all the required data that are needed to update the state of that module.
This means that the action happens at the start of the arrow: it is the source module that is responsible for changing the destination module, which just sits there waiting to be updated by an external entity, because it delegated its state-management logic to external entities.
This is why this type of design is called “passive”, because each module just waits for some external entity to act on it via a public method and initiate a change.
In the eCommerce example, this could be described as the “Invoice.updateInvoice(product)” method call made from the “Cart Module”, where it’s the latter that initiates a change on the “Invoice Module”, changing its internal state.
- In the Reactive Model, the system design becomes, unsurprisingly, “reactive” because now each module is responsible for updating itself and keeping its state updated; there is no delegation to other external entities.
In terms of boxes and arrows, the change now happens at the tip of the arrow, because it’s the destination module that is subscribing and receiving an event from another module, and it’s the destination module that’s acting on this event, changing its internal state in response to something that happened outside its boundaries.
In the eCommerce example, this might be described as the “Cart.onProductAdded(Invoice::updateInvoice)” callback registration that happens in the “Invoice Module”, which is responsible for updating itself whenever a relevant event is received, and in this case it’s an event from the “Cart Module”.
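The contrast between the two styles can be sketched as follows; the class and method names mirror the hypothetical “Cart.onProductAdded(Invoice::updateInvoice)” example above and are not a real API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class CartInvoiceDemo {
    static class Cart {
        private final List<Consumer<String>> onProductAdded = new ArrayList<>();

        // Other modules register callbacks; the Cart never calls them by name.
        void onProductAdded(Consumer<String> callback) {
            onProductAdded.add(callback);
        }

        void addProduct(String product) {
            // The Cart only broadcasts the event; the action happens at the
            // tip of the arrow, inside whichever module subscribed.
            onProductAdded.forEach(cb -> cb.accept(product));
        }
    }

    static class Invoice {
        private final List<String> lines = new ArrayList<>();

        Invoice(Cart cart) {
            // Reactive style: the Invoice subscribes and updates itself.
            // (Passive style would instead expose a public setter and let
            // the Cart call invoice.updateInvoice(product) directly.)
            cart.onProductAdded(this::updateInvoice);
        }

        private void updateInvoice(String product) {
            lines.add(product);
        }

        List<String> lines() { return lines; }
    }

    public static void main(String[] args) {
        Cart cart = new Cart();
        Invoice invoice = new Invoice(cart);
        cart.addProduct("book");
        cart.addProduct("pen");
        System.out.println(invoice.lines());
    }
}
```

Note that the Invoice registers the callback itself, in its own constructor: nobody outside the module can change its lines, so all of its state-management logic lives in one place.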
This is the meaning of a Reactive System: reacting to changes and centralizing each module’s state-management logic inside the module itself, without delegating it to external modules and thus without exposing public methods that change its state.
This way of looking at our system is powerful, because it gives us a new point of view that we can leverage to guide our decisions when defining our modules, components and the Reactive Pipeline.
We should strive to model our components as a series of pure functions (methods that have no side effects) that operate on reactive streams of data, which are grouped into modules that broadcast and consume events from all around the system.
Now that we have written our modules, defined our events and registered our callbacks, it’s time to review the tests we wrote and make sure we cover as much as we can.
On this topic it’s important to focus on the event propagation and on the end result of a given Reactive Pipeline we want to test, since that’s all that happens in our system.
Providing a given input should get us back an expected reactive stream at the end of the pipeline, and we can write tests for that making sure that the end result contains the expected number of items, with the correct properties and so on.
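In Project Reactor this style of assertion is usually written with StepVerifier from the reactor-test module; as a JDK-only sketch of the same idea, a pipeline modeled as a pure function can be tested by asserting on the number of items it returns, on their properties, and on its determinism (the pipeline stages below are hypothetical):

```java
import java.util.List;

public class PipelineTestSketch {
    // A hypothetical pure pipeline: drop invalid prices, then apply 20% tax.
    static List<Double> pipeline(List<Double> prices) {
        return prices.stream()
                .filter(p -> p > 0)     // drop invalid entries
                .map(p -> p * 1.20)     // apply tax
                .toList();
    }

    public static void main(String[] args) {
        List<Double> out = pipeline(List.of(10.0, -5.0, 20.0));

        // Assert on the end result: number of items and their properties.
        if (out.size() != 2) throw new AssertionError("expected 2 items");
        if (Math.abs(out.get(0) - 12.0) > 1e-9) throw new AssertionError("wrong first item");

        // Purity gives reproducibility: the same input always yields the
        // same output, so a failing case can be replayed exactly.
        if (!out.equals(pipeline(List.of(10.0, -5.0, 20.0))))
            throw new AssertionError("pipeline is not deterministic");

        System.out.println("all checks passed");
    }
}
```

The last check is the one that matters most for debugging: because the pipeline has no side effects, any input that fails once will fail identically on replay.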
This testing is very powerful because when a test fails we can pinpoint exactly which part of the reactive pipeline failed: we see that a given event went in and that, at some point in the operator chain, our reactive stream “got corrupted” (we get back the wrong elements) or “got lost” (we receive an error signal because something failed down the pipeline).
Being able to say “this pipeline fails when this reactive stream is fed into it and this event is received” is a good starting point when debugging failures in tests and in production, because we can recreate the exact same scenario and, since our components strive to behave like pure functions, we can be sure that the same reactive input stream will always trigger the same error.
As every developer knows, the first step in fixing a bug is being able to reliably reproduce it, observing the state of the system multiple times to see where something goes wrong, and the Reactive Model gives us exactly that: reproducibility through components modeled after pure functions, free of side effects.
There is only one type of scenario where feeding the same reactive stream into the pipeline can give either an error or a result, and this is when the pipeline itself is not composed of pure functions and thus one or more of its operators have side effects (basically depend on other systems like APIs or DBs or external resources).
While this is undesirable, it’s clear that at some point our system needs to interact with the external world and this is going to produce side effects, and these are going to make our pipeline not behave like pure functions, which in turn hinders debugging a bit.
But it’s important to remember that this does not nullify the benefits of the Reactive Model; it only introduces some uncertainty in the debugging process, because now we need to know not only the state of our pipeline but also the state of the external system we communicate with (whether it’s operational or not) to determine whether the error is in our code.
The effects of depending on an external resource in the Reactive Model can be minimized with some considerations on the design of the modules.
Basically what we do is to “confine” all the interactions with external systems to components that are as far away as possible from the core of the business logic, leaving that part modeled as a chain of pure functions, with all the added benefits.
Furthermore, we can limit each interaction with an external system as much as possible, ideally having just one per reactive pipeline, so that when one fails we know exactly what went wrong and where, and can immediately check the operational status of the external system instead of debugging our code. We can do this because all the other components in the pipeline are pure functions: if they worked even once, they will work again under normal conditions (no broken external systems).
In conclusion, designing a system from the ground up following the Reactive Model is a different approach from the traditional way of designing “passive” systems, but with some experience and good software architecture, the resulting reactive system can handle many times the load of a “traditional” system and also serve each request in a shorter timeframe, given that we don’t block anywhere in the system.
As already stated, there’s also the added benefit of increased clarity when debugging issues, provided we modeled our components appropriately and kept our business logic free of side effects.
When a Reactive System is The Right Choice
Given the stated benefits of the Reactive Model and the praises that are attributed to Reactive Programming, it might seem that going reactive is always the best choice no matter what our requirements are.
While it’s true that adopting the reactive paradigm will win you some sweet performance gains and, if done well, will also aid the developers in the debugging process, there are a number of cases where it does not make much sense to adopt the Reactive Model and in some cases might even be counterproductive.
We will have a look at the use cases where it does not make sense to adopt the Reactive Model because the losses would outweigh the wins. This is due to the fact that, as stated at the start of the article, this particular approach requires some changes (also at the organizational level, even if minimal) to the body of practices already in use inside the company.
So, if you or your company fall into one or more of the cases described below, adopting the Reactive Model is probably going to cause you more issues than it would solve.
If you want to go reactive, you need to get out of these cases and make sure you prepare the right environment to make it happen.
The cases, starting from the top of the organization:
- Your Product Manager/Product Team demands very strict deadlines, and there is no possibility for delays under any circumstances.
- Your Software Architect does not buy into the Reactive Model because of the added complexity of introducing Functional/Reactive Programming into your existing practices (that work well already).
- The developers on the engineering team don’t want to make the effort of learning and being uncomfortable with a technology that’s so different, and thus don’t really put in the time to make mistakes and learn from them.
- Your software stack does not allow reactive, non-blocking I/O and you cannot change it.
- There are some libraries that are old/non-reactive and wrapping them to be reactive is too much work.
Expanding a bit on these points, we can see that, except for points 4 and 5, most of the issues come from people and not from technology. This is to be expected, because the Reactive Model brings a lot of change to the tried-and-tested procedures already in place in an organization, and not everyone likes learning something so different and feeling like a beginner all over again.
Starting with point 1: if management requires precise deadlines and the engineering team cannot delay any of them, it's very hard to adopt the Reactive Model, unless the developers have substantial experience in the field to counteract any unforeseen issues that come up during development.
Developers without that experience may hit problems that delay them for longer than allowed and push a deadline past its due date, which, as stated in the premise of this point, is not permitted.
The truth is that most of the time things will go smoothly and the developers will start to reap the benefits early on. But there will be instances where problems arise, with the underlying stack, with an external system, with an old library, or with the comprehension of the Reactive Model itself, that need time to be properly addressed, and as stated above, there is no extra time to spend on learning and adjusting.
In this case, it's better to go with the "old and boring" technology whose failure modes are also well known, thus avoiding unexpected issues that might pop up. With these product teams, it's all about the devil you know.
Moving on to point 2: unless you are the architect in question (in which case I would kindly suggest reading our first article, linked at the start of this piece, and following the series), it does not make sense to go against your direct boss on these kinds of decisions.
Of course, you should argue your point and try to convey the benefits of the Reactive Model, but if your team is not fully on your side, it's difficult: you risk vouching for an alternative that your boss already dislikes and that not all of your team wants to embrace.
In the end, we need to remember that we are part of a team and cannot put our own needs above everyone else's; we need to do what's best for the team as a whole.
This is not to say that all hope is lost, far from it. What you can do is demonstrate, with a small project that needs to be implemented anyway, how manageable this complexity is, and implement it yourself following the Reactive Model, with the permission of your Architect/Lead Developer/Boss.
If you are lucky, they will be convinced that the complexity of this new Reactive Model is manageable and it might be safe to gradually adopt it.
If you are unlucky, well, you did your best and should be proud of it. Don't get discouraged; instead, take comfort in the knowledge that we are hiring, and that no boss here is opposed to Reactive Programming. In fact, you'll write lots of it at Itembase. If this does not scare you off, please let us know your general details at: ramo.k[at]itembase[dot]com
Point 3 is a very common case, in my experience, and it revolves around the fact that not everyone likes challenging their own knowledge and established practices to learn something so new and different.
And that is perfectly fine; we are all different, and that's what makes us unique. But if the developers don't want to put in the effort to learn, make mistakes, and be a little frustrated with a topic that changes how they write software, there's nothing you can do to make them happy about it.
You, as an Architect or Lead Developer, could "force" them to adopt the Reactive Model, but then most of them will probably come to hate it. That's not something you want: developers working with something they don't like and aren't comfortable with. At the first sign of unforeseen problems, everything will be blamed on the framework/functional style/Reactive Model, whether that's true or not.
In this case, if you can, lead by example rather than by "force". If the developers are not willing to follow, accept the situation and set this aside for a while; with enough time, they may come to appreciate it.
Points 4 and 5 are largely self-explanatory: if your stack cannot support non-blocking I/O, or the cost of wrapping legacy libraries outweighs the benefits, the Reactive Model simply won't pay off.
This second piece on Reactive Programming closes the circle as an introduction to the topic: what it is, why you should use it, when it's good to do so, and how you can safely adopt it without having your organization turn on you because of all the changes that come with it.
On a more serious note, we at Itembase adopted the Reactive Model out of necessity and managed to make good use of it. Our microservices are more performant than they were before, and we are able to process the high volume of data coming into our systems while staying within the very specific limits and constraints dictated by the space where we operate, the eCommerce sector (along with Financial Institutions and Logistics), and more specifically by a sizeable part of it called the Connectivity Framework.
A few words on this: the Connectivity Framework is the response to the Connectivity Problem, which can be summed up by the question: "Given the rising number of incompatible eCommerce platforms, how can I integrate with them, which ones are worth integrating with, and how can I manage the ever-increasing complexity that comes with all these integrations across all departments participating in the growth of the company, by using connectivity in Sales, Marketing and IT?"
The Connectivity Framework is the solution to this challenge and Itembase is one implementation of this solution.
We manage and hide this complexity from you, acting as an "access point" to this vast number of eCommerce platforms by providing a single, standardized point of entry, reducing your integration complexity from a potential N eCommerce systems to just one: Itembase.
In the next articles of this series, we will explore topics related to Reactive Programming in more depth, both in terms of programming itself (threads, cooperative scheduling, worker pools) and infrastructure (Reactive + Docker, distributed logging, tracing), plus a variety of other interesting subjects that come up when implementing the Reactive Model and that we haven't had time to cover yet.
Stay tuned for the next article next week!