
Why JavaScript on backend is bad for enterprise

Once you have a hammer, everything is a nail. But actually, you are holding a broken stick while trying to convince everyone that it’s a hammer because it can bash nails, and it gets the job done… eventually. And you’ve been using that stick for x years on so many things; it served you perfectly, it got the job done, and you became proficient with it. You know that, when others doubt its power, it’s just their lack of skill that’s the problem.

The goal of this article is to share my experience and to serve as a word of warning ⚠️ before you decide to base your enterprise server application and other services on JavaScript. At the end of the day, you make your own choices, and I wish you good luck with them 🍀 .

An enterprise application is software that is designed specifically for large, corporate (and often complex) environments.

TL;DR?
If you want to skip a bit of JavaScript history and my experience and jump directly to the arguments, click here.

JavaScript history recap

[Image: /posts/javascript-backend-is-bad-for-enterprise/old-history-book.webp]

JavaScript is a highly dynamic, (often) just-in-time compiled scripting or programming language that conforms to the ECMAScript standard. It is dynamically typed, has prototype-based object orientation and first-class functions.

JavaScript is a core technology of the web, alongside HTML and CSS. It allows you to add functionality to web pages, like button actions, mouse event handling and similar.

In 1996, JavaScript was submitted by Netscape to Ecma International for standardization across all browser vendors. The standards continued with ECMAScript 2 in 1998 and ECMAScript 3 in 1999; work on ECMAScript 4 began in 2000, around the time Internet Explorer’s market share reached 95%, which solidified JavaScript as the de facto standard for browser client scripting. ECMAScript 5 was released in December 2009, and ECMAScript 6 in 2015.

Node.js was created in 2009 by Ryan Dahl and caused a significant increase in the usage of JavaScript outside of web browsers. Node combines the V8 engine, an event loop and I/O APIs, providing a stand-alone JavaScript runtime system.

For more, see the reference on Wikipedia.

A little bit about me

[Image: /posts/javascript-backend-is-bad-for-enterprise/legacy-worker.webp]

The image above showcases my work experience: most developers (at least in Serbia) get the lovely chance of maintaining badly designed legacy projects (cleaning up the mess), with minimal opportunity to create new software with their own design choices, because that is reserved for CTOs and other “important” people of the company’s engineering team.

I worked with quite a few programming languages and projects during my college days (Java, C#, ActionScript, PHP, JavaScript) and tend not to mention them, because I consider most college work an introduction to a language, simple tools and basic usage. It isn’t deep, real, practical work showing how things are done “in the wild”. The most interesting thing worth mentioning from college is that my graduation project was developed in Node.js 0.11.x (in 2016) as a web CMS application for a student organization, with all the popular tooling like MongoDB, Angular, RabbitMQ, Docker and Nginx, and scaled with a separate chat server. Those were the early days of Node.js, and it barely saw any production use; it was mostly PHP servers, a bit of Java and C#. So the graduation project allowed me to start early with JavaScript on the backend and gather a lot of experience and knowledge about how different programming languages work.

What attracted me to Node.js was the efficient concurrency model, showing that it could handle a high number of requests on a single small machine with efficient usage of memory and CPU, especially if you had a websocket chat to implement for many clients.

Async I/O model
I wrote an article showcasing how it works, with a comparison to traditional and other models of concurrency.

A lot of developers jump into Node.js and backend coding because they already know the language from the front-end and want to use it on the back-end; for me, it was the concurrency model that mattered most.

From there, the professional projects I worked on included PHP with (Laravel, Symfony, CodeIgniter, Phalcon) or without a framework, bare-bones Node.js projects with Express (more recently Nest.js), Java with the Spring framework and Quarkus, and some personal projects in Go.

Early in my career I touched the front-end quite a bit, from vanilla JS and jQuery to AngularJS, React and Angular 2, but that’s not very relevant to this back-end programming topic (and even if you’re thinking of code sharing, don’t; it’s just awful, and you’re being too lazy).

I think every programmer should know at least two or three programming languages, one strictly typed and compiled and one dynamic, to better understand how things actually work and what the benefits and downsides are. We use a toolbox of tools, not one hammer to rule them all.

Lately, I’ve put my time and focus only into Java and Go related work, in order to focus on other tools in the system, as I see them being the most productive, efficient and easiest to maintain. This is a language-focused article, but for context I’d like to add that I focus on the data domain, which includes database design, event-driven systems, microservice design, real-time processing, reliability and scalability.

I’m saying all of this to show my varied background, my transition and my thinking when selecting Node for the first time, and the experience I gained from it over 8+ years of programming.

Popularity - the good?

[Image: /posts/javascript-backend-is-bad-for-enterprise/language-popularity.webp]

Every language that has a community has good parts, otherwise why would people be using it, right? (Except maybe Ruby: I haven’t worked with it, but I fail to see what advantages it provides over any other, and I consider languages without strict typing awful for enterprises. My guess is that it had tons of jobs and was easy for beginner programmers, cough, cough, PHP, to start with.)

There’s also a thing where people like to say “language x is dying”, but in 98% of cases it’s just their hatred for the language. A language dies when its community dies; otherwise it’s still alive and kicking.

But before going into why it’s popular, I just want to make one thing clear from my end:

I think JavaScript is a wonderful starter programming language for anyone: you get productive fast and can do so many things with it in so many ways. After it, you will have gained a great perspective on other languages.

⚠️ Just to repeat, this article focuses on the backend!

After talking with quite a few people about Node over these past years, I noticed a few important reasons re-appearing for why someone would select it for their backend:

  1. It’s JavaScript (most prevalent)
    • they already knew it and just wanted to program on the backend
    • easy for full-stack devs
    • management wants front-end devs to also work on the back-end
  2. I/O efficiency (rare): higher concurrency throughput and lower memory usage than traditional thread-per-request apps
  3. Easy concurrency (rare) for junior developers or those who had worked only on the front-end

There were already a ton of people working with JavaScript, and a little while after some frameworks (or just router libs) appeared, a ton of people jumped to Node.js for backend programming, as it was quite easy and everything was familiar. Its lack of libraries got people hooked on creating the missing ones, and the ecosystem grew quite fast, especially since everyone was re-using other libraries to speed things up (which would later prove highly problematic; more on that later in the dependency ecosystem chapter).

Creating an HTTP server was easy

The current way of creating a server:

// server.mjs
import { createServer } from 'node:http';

const server = createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World!\n');
});

// starts a simple http server locally on port 3000
server.listen(3000, '127.0.0.1', () => {
  console.log('Listening on 127.0.0.1:3000');
});

// run with `node server.mjs`

And if you wanted to add a counter for counting requests, you would just need to define a variable and increase its value on each request, as sketched below. Because of the single-threaded design, you wouldn’t need to worry about race conditions like in most other languages.
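A minimal sketch of that counter, extending the server from above:

// counter.mjs, a variation on the server above
import { createServer } from 'node:http';

// A plain module-level variable works: all handlers run on one thread,
// so `count += 1` can never race with another request.
let count = 0;

const server = createServer((req, res) => {
  count += 1;
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end(`Hello World! Request number: ${count}\n`);
});

server.listen(3000, '127.0.0.1');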

People tended to say that this is easier and thus much better than in other languages, but when I asked them about the comparison, they would compare it to a framework that contains tons of features and tools for a quick productivity bootstrap. It was literally an oranges and trees comparison.

Later down the line, after a while of having SPAs (single page apps) on the front-end side, more frameworks started to appear for rendering those SPAs on the backend with Node.js, like a hybrid between SPAs and the older template rendering of MVC server apps. Thanks to Node being on the server, it was possible to execute some JavaScript on the backend. Some of the known ones are Next.js, Angular, Nuxt, SvelteKit, Remix, Astro, Vite and so on. As you can see, quite a few.

In summary: it was easy and convenient.

People in general, developers included, are so attracted to shiny new things that they don’t think much about real usage, the pros and cons, or what problem it tries to solve. They see a pretty website, stylized and colored in an eye-pleasing way, showcasing a hello-world example and a couple of lines of simple configuration, and they immediately think: “this is the best thing, my code will be clean like that”, most of the time without even looking at the alternatives and doing a comparison.

The sheer number of times I asked for valid arguments on “why did you choose x over y” and got an answer in the form of “because I think it’s easier/better” or “it looked good”, or they couldn’t think of any and asked why shouldn’t they. That was the whole argument for them; they just don’t know or don’t care and are using it because they got the job of doing it, and that’s all that matters.

The bad and the ugly

[Image: /posts/javascript-backend-is-bad-for-enterprise/language-safety.webp]

An interpreted language that was designed and created in a really short period in 1995 to be run in a browser, after a ton of patches, was at one moment in 2009 put on a server. What could go wrong…

Now, it was a great idea to utilize the event loop and the async concurrency model that JavaScript had to get the maximum efficiency out of I/O and events. This was its strength, but there is a huge difference between planning and designing a language for a domain and just porting it over, as a patched monster, into a completely new environment with yet another patch.

At the start, every new language feels fantastic and refreshing; it is only after you spend quite a bit of time with it that you start noticing its problems, and JavaScript has a lot!

Increase in developers, reduction in skill bar
It’s important to note that there was a huge increase in demand for software engineers, which lowered the skill bar quite a bit: companies couldn’t be picky when staffing a project, so they would hire people with lower skills just to get someone working on it and not waste time. This influenced those hires to think highly of themselves and not look into improving, as in their mind they had already reached “the top”, and to present their principles as “how things should really be done”.
One language to rule them all

Quite a few (or maybe a lot of) people pick one programming language and stick with it for life. On one hand, I understand: not everyone wants to burden their life with too much programming (we have a life after work, right?) and learning a new language is time-consuming. But then again, you can’t expect to advance to a senior software engineer position or make high-level technical decisions, as your knowledge is quite limited, your way of thinking is a closed box, and you can’t vouch much for engineering direction. I had a case where a JavaScript dev had never heard of a thread when I mentioned it to him.

Again we come back to the story of the silver hammer.

The list below is grouped into chapters to better focus on the issues of the language, as it has many: its design, the build system (or systems, as it had so many and still has), the ecosystem that flourished around it, and what maintaining projects actually looks like in practice.

Dynamic and weak

[Image: /posts/javascript-backend-is-bad-for-enterprise/wrong-math.webp]

As said in the beginning, the language is a high-level language, interpreted and dynamically typed. This is the complete opposite of a strictly typed language, meaning that a variable holding a string can become an integer, array, boolean, object and so on. It is also weakly typed, which means certain types are implicitly cast depending on the operator used and its operands. For example, here is a table of outcomes:

| Left operand | Operator | Right operand | Result |
|---|---|---|---|
| [] (empty array) | + | [] (empty array) | "" (empty string) |
| [] (empty array) | + | {} (empty object) | "[object Object]" (string) |
| false (boolean) | + | [] (empty array) | "false" (string) |
| "123" (string) | + | 1 (number) | "1231" (string) |
| "123" (string) | - | 1 (number) | 122 (number) |
| "123" (string) | - | "abc" (string) | NaN (number) |

What this means is that it’s easy for errors to sneak their way into your code. It might not be the first function, which expects a number but gets an array, that fails; instead it could be the third function down the line, and you cannot know. Or it might not fail at all, but get into a bad/dirty state, making it really difficult to see where the problem originated, and cause problems elsewhere later.

With strictly typed programming languages, you cannot get into that situation, because when a cast fails you immediately get an error telling you where it failed, at compile time, while JavaScript fails at runtime. This means you need to execute all code paths and cases in order to catch any errors; this is why it’s said that with JavaScript you test in production.

Isn't this problem gone now that we use TypeScript?
Now some say this will never happen, as modern JavaScript is written in TypeScript and the transpiler will warn you about those errors and refuse to transpile. Well, that couldn’t be more wrong: in the end, TypeScript is transpiled to plain JavaScript. It’s just a helper; you’re still using JavaScript, just preprocessing it. We will talk more about this in the next section, JavaScript supercharged!; for now, let’s stay with JavaScript only.

For example, what do you think would happen if wrong (or missing) types were passed to this function?

function calculatePercentage(quantity, percent) {
    return (quantity * percent) / 100;
}
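A few of the possible outcomes, easily verified in a Node REPL:

calculatePercentage(50, 10);    // 5, the intended case
calculatePercentage('50', 10);  // 5, '50' is silently coerced by *
calculatePercentage([], 10);    // 0, [] coerces to 0, silently wrong
calculatePercentage({}, 10);    // NaN, which spreads through later math
calculatePercentage(50);        // NaN, percent is undefined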

Now imagine a huge function with 200-400 or more lines of code and a dozen arguments, where some are objects and some are arrays of objects. You have no idea what is inside those objects and what types they are or can be, and the same goes for the arrays (which sometimes might not be arrays at all and could be plain objects). In that case, it’s hell to figure out what fields each object has; you need to dig through all usages of the function and hope the calling function doesn’t have the same problem, or, God help you, isn’t an API call against an undocumented service. Not even an IDE can help you there. Then you have developers who love mutating functions that create or change fields on those objects, and there are dozens of them; it all becomes a maintenance nightmare.

Real world projects
Be aware that most projects are ancient legacy code that can span 10 years or more; whoever created them is probably not there anymore, and no one is sure why some strange logic is the way it is. You also need to be careful not to break it, as it’s already heavily used in production.

It just hurts your brain thinking about all the possible cases; let’s hope you were aware of such things and added safety nets like type checks:

function calculatePercentage(quantity, percent) {
    // I'm lazy to write separate checks, so we won't know which of these 2 variables was actually at fault
    if(typeof quantity !== 'number' || typeof percent !== 'number') {
        throw new Error('Invalid type provided, expected number!');
    }
    
    return (quantity * percent) / 100;
}

And this is cumbersome to do every time, and trust me, extremely few developers do such checks. Most, if not all, of them program for the happy path, because it shouldn’t ever happen, right?

A bit of a sidetrack: I’ve had developers who would spend meetings discussing how we should create utility functions like isNumber(value) instead of typeof value === 'number' because it saved space or something. There’s a ton of devs who don’t focus on work and real problems; instead they drift into subjective artistic code expressions that mean nothing in the long run (or even in the immediate moment) and just waste time.

Some developers (also really rare) would just go with jsdoc to document the types and let your IDE (or editor with plugins, cough, VSCode) show you warnings, hopefully. (You can also specify whether a value is optional, has a default, or supports multiple types, like both int and string.)

/**
 * 
 * @param {number} quantity
 * @param {number} percent
 * @returns {number}
 */
function calculatePercentage(quantity, percent) {
    return (quantity * percent) / 100;
}

Though, nothing prohibits someone from passing a wrong type to the function, or taking input from somewhere without verifying the type.

As said, developers who properly use jsdoc are rare, most won’t update the docs if they change something, and it becomes a hassle constantly reminding devs to add or maintain these docs.

Even if you are alone, write the most perfect code and somehow keep everything perfectly tracked in your brain (I have no idea how you can for anything that’s not small), you will still get problems from the following inputs, which you need to control:

Using libraries

There’s a whole chapter on them later, but a quick summary: you don’t know, and you cannot trust, that you will get what you expect from them. As the language isn’t strictly typed, nothing prohibits a library from making a mistake and returning a completely different type.

App input

When someone sends HTTP data (say, a POST request) to your app, you need to validate it with a validator; otherwise you might let anything into your system, like a poison pill that will start causing problems at some unknown moment. And even if you do add validation, you might have forgotten to eliminate the excess fields you don’t care about, and someone down the line might use the spread operator or pass the whole object, extra data included, to the database to be saved, or to another service or function that loops through all the object’s data and picks up those unknown fields. And there can still be a problem, because you accidentally used the wrong annotation from the library for transforming a field, or the library has a broken function for that type, or can’t properly handle nested arrays of objects, etc.
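As an illustration, a hand-rolled whitelist (real projects would typically reach for a validation library; the field names here are made up):

const allowedFields = ['name', 'email', 'age'];

// Keep only known fields so extra data can't leak further down
// via spread operators, loops or a save of the whole object.
function sanitizeUserInput(body) {
  const clean = {};
  for (const field of allowedFields) {
    if (body[field] !== undefined) {
      clean[field] = body[field];
    }
  }
  if (typeof clean.name !== 'string' || typeof clean.age !== 'number') {
    throw new Error('Invalid payload');
  }
  return clean;
}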

The same notion applies when you receive data from an API call to another service, for example: you need to validate that data as well and eliminate extra fields, otherwise it might become a poison pill to your system. APIs don’t always work as intended; sometimes they return strange data or responses.

Same goes for connecting with event systems like Kafka, RabbitMQ or tools like Redis, basically anything outside your application.

Don't you need to do the same with strictly typed languages?
Kinda: in strictly typed languages you also need to be careful about input from outside, but far less than with JavaScript.

Trusting other developers' functions

There will be functions which are huge, or parts of the code you don’t know, that may or may not have jsdoc, and even if they do, other developers could have made mistakes somewhere that produce an invalid type as input to your function. Then you would need to do the same validation as above in App input, but most developers will just shrug and say “not my problem” 🤷‍♂️ and hope for the best 🍀

Playing with the code

There will always be quite a few developers who want to play with the design of the code or the whole app: either they are bored, desperately want to try that design pattern, want to reduce code, or just think that something is prettier if written in their specific way.

Because JavaScript has almost no rules, they can do all kinds of interesting things, like:

class AbstractWorker {
   constructor(event, cfg) {
      this.raw = event;
      this.cfg = cfg;
   }

   // In fairly old legacy code you will have plenty of callbacks intertwined with generators and maybe a bit of promises.
   doD(cb) {
      // do some secret stuff
   }

   execute(cb) {
      // Reads the configuration and executes all the methods specified in it
      // They would usually have the async library with a callback waterfall utility function or similar
   }
}

// If you worked with quite old legacy code, you would see prototype-style inheritance here instead
class Worker extends AbstractWorker {

   constructor(event, cfg) {
      super(event, cfg);
      this.evt = event;
   }

   doA(cb) {
      // manipulate and mutate this.event...
   }

   doB(cb) {
      // re-create the this.event...
   }

   doC(cb) {
      // transform and trigger other services or persist...
   }
}

const config = [
   'doA',
   'doC',
   {
      // Method will only be invoked if this condition is met
      condition: (evt) => evt.important === true,
      method: 'doD',
   },
   'doB'
]

function handleEvent(event, cb) {
   const worker = new Worker(event, config);
   return worker.execute(cb)
}

// Exporting like this prevents exporting anything else from the file and confuses the IDE, as this exported function
// can be renamed to anything at the import site, and quite a few developers rename it out of laziness,
// which makes it hard to track in the codebase.
module.exports = handleEvent;

Instead of just having functions or methods accept and return values to one another, they’re doing mutations and specifying through the config which methods should be called. These configs existed because the same class would be reused for handling different events. There are far worse things than this, by the way.

Evolution

[Image: /posts/javascript-backend-is-bad-for-enterprise/all-directions.webp]

For some reason, developers focus on the latest state of the language and disregard the past completely, even though there’s a high chance they will work on a project still stuck in that past, and they fail to notice that some implementations still follow the old design and that most of the language’s past is still within the language.

JavaScript, as said before, was designed initially for the web; it went through a lot of changes, and it didn’t care much about backwards compatibility while doing so. This can especially be noticed in old libraries or old projects. These things stick around in the ecosystem of the language, and you will encounter them at some moment for sure.

Concurrency

Concurrency changed quite a bit in JavaScript. At the start, you would only use callbacks to handle I/O actions, like:

function myFunc(id, cb) {
   myApi.fetchUser(id, (err, user) => {
      if(err) {
          return cb(err);
      }
      
      return cb(undefined, user);
   });
}

Now this is a simple example, but in complicated cases you would have many more of these, even nested. An interesting bit that some developers don’t know and some forget: if an error occurred that was not passed through the callback, a plain thrown error (like accessing a property of undefined), it would stop the propagation through the callbacks. It would become an uncaught exception. The calling function would never get its callback invoked to handle it properly, and the system would just stop working at that point. Imagine it was a queue processor, suddenly stopping without any notice.

So you would always need to do something like this:

function myFunc(id, cb) {
   try {
      myApi.fetchUser(id, (err, user) => {
         if(err) {
            return cb(err);
         }

         return cb(undefined, user);
      });
   } catch (err) {
       // maybe even log the error
       cb(err);
   }
}

There is still a popular library for easier management of these callbacks (async), but the code still ends up messy and prone to errors.
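A typical shape with the async library’s waterfall helper, reusing the hypothetical myApi/myService from these examples:

const async = require('async');

function myFunc(id, cb) {
  async.waterfall([
    (next) => myApi.fetchUser(id, next),
    // pass the user along, since updateX's own result is not what we want
    (user, next) => myService.updateX(user.data, (err) => next(err, user)),
  ], cb); // cb receives (err, user)
}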

Even if you are working with current Node.js, you will encounter this in legacy codebases or older libraries.

So, when you start checking a library, you might encounter different styles of code in different places. This was due to fast evolution, but an evolution that wasn’t standardized or careful, and the community was in a hurry (as always). Because of it, there were many libraries that provided a Promise class to replace callbacks as a better solution: Bluebird and Q, just to name a few.

And it was better than callbacks, but still a bit harder to track:

function myFunction(id) {
    return myApi.fetchUser(id)
            .then((user) => {
                // do something with the user

                // return the chain so myFunction resolves with the user, not undefined
                return myService.updateX(user.data)
                        .then(() => {
                            // because we want to return the user, not the result of updateX
                            return user;
                        });
            });
}

It can of course be made simpler, with function references only in then, but sometimes you need to aggregate data from other calls, and it becomes messy. Plus, imagine you are working on legacy code with quite a bit of callback-based code: you would need to adapt those promises and trigger callbacks, and it would become disgusting, as sketched below.
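A sketch of such a bridge, wrapping the promise-based myFunction from above for callers still on callbacks:

// Adapter so legacy callback code can call the promise-based myFunction
function myFunctionCb(id, cb) {
  myFunction(id)
    .then((user) => cb(undefined, user))
    .catch((err) => cb(err));
}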

Promises could also be utilized to easily execute work in parallel:

Promise.all([
    functionAPromise,
    functionBPromise,
]).then((resultArray) => {
    const resultA = resultArray[0];
    const resultB = resultArray[1];
})

or to execute multiple functions and take the result of the quickest with Promise.race.

It’s easy to accidentally “disable” a Promise: for example, by forgetting to await the function when calling it, but mostly by forgetting to propagate or catch the error, so the caller above never gets it, or you simply don’t log it.

// We intentionally don't want to await the response
myFunction()
   .then((result) => {
       // doSomething with result
   })

// No error handler

Or if we handle the error from within but still return the promise:

function otherFunction() {
   return myFunction()
           .catch(err => {
               // log the error 
               // this will return undefined as a successful result
           })
}

Nothing would warn you or fail if you or someone else accidentally did it like this somewhere.

At one moment, before async/await arrived, people couldn’t wait and started using generators to simplify the structure, and of course a library came along to allow it, called co:

Generator based control flow goodness for nodejs and the browser, using promises, letting you write non-blocking code in a nice-ish way.

co(function* () {
  var result = yield Promise.resolve(true);
  return result;
}).then(function (value) {
  console.log(value);
}, function (err) {
  console.error(err.stack);
});

As you can see, the generator function uses the yield keyword without any callback functions or nested then callbacks.

And again, everyone was eager to use this ASAP, and because of it projects ended up filled with callbacks, promises and generators mixed together. In a large project it’s difficult to find time for a full refactor; you refactor as you go.

Then finally we got async/await to make things easier, so we could write:

async function myFunction(id) {
    const user = await myApi.fetchUser(id);
    await myService.updateX(user.data);
    return user;
}

As you can see, it’s a huge improvement, but don’t forget: the past is still there, so you will still encounter callbacks, promises and generators. Depending on what you are working on, they will appear with higher or lower frequency.

Async/await is generally much easier to implement at the language level than virtual threads, but its downside is the introduction of the colored functions problem. Another issue is that everything async goes through the Node.js event/microtask queue, and every continuation created by calling an async function gets queued again. In the end, it’s really easy to cause starvation of execution, as something that is fast but makes a lot of async calls will constantly get moved to the back of the queue. There’s no fair distribution as you would get with standard or virtual threads, meaning you can’t achieve consistently low latency.
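A tiny illustration of that requeueing (names are mine): every await suspends the function and schedules its continuation on the queue, even when the awaited value is already resolved:

async function demo() {
  console.log('step 1');
  await Promise.resolve(); // nothing async happens, yet we yield to the queue
  console.log('step 3');   // continuation runs only once the queue gets to it
}

demo();
console.log('step 2'); // prints before 'step 3'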

An interesting showcase of the above: the following function

async function myFunction(params) {
    return await userService.getUserByParams(params);
}

is slower than

async function myFunction(params) {
    return userService.getUserByParams(params);
}

Notice there is no await keyword on the userService method call in the second example; the first one schedules an extra tick on the event loop at the start. Now, this can be (somewhat) prevented with build/lint configuration, but we’ll get to the configuration part later.

Everything mentioned so far about concurrency was at the hello-world level of usage. If you want to do some complicated things, you’re walking into the land of RxJS and Observables; have fun with that, because I never have, as it starts to hurt fast and the surface for error is huge and slippery.

An example: you have a web app handling user requests, and your endpoint calls another service for some data. Let’s say you want to cache the data from that other service (for simplicity, ignore scaling and clones of our service). Because it’s a single-threaded app, you can simply declare a single variable/instance where you get and set the value.

Just to make it easier to follow, let’s look at a really simple example of how it would look:

// Extremely simple design, the service is a single instance in the entire app
class MyService {
    
    // Imagine we have TTL control and other things for expiration and other cases
    cache = undefined;
    
    async getData(params) {
        if(this.cache) {
            return this.cache;
        }
        // Imagine that the service is really slow in giving a response
        const result = await otherApi.getData(params);
        this.cache = result;
        
        return result;
    }
}

If 100 or more requests triggered this endpoint and this function, it would execute the API call all 100+ times, because they all entered the function while the cache was empty and had no value. In Node.js you can’t block code execution! Think about how you would make the other calls wait for the first call to finish before they enter this function; it’s not easy at all.
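To be fair, the usual workaround in Node is to cache the in-flight promise itself, so that concurrent callers share one request. A minimal sketch (single process only, no TTL, and note that a rejected promise would stay cached):

class MyService {

    cachePromise = undefined;

    getData(params) {
        if (!this.cachePromise) {
            // the first caller starts the request, the other 99+ await the same promise
            this.cachePromise = otherApi.getData(params);
        }
        return this.cachePromise;
    }
}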

With other languages that use threads or virtual threads, these things are extremely easy: just use a mutex or a semaphore, and you probably have a dozen implementations of those for complex cases, like a weighted semaphore. For now, let’s just show the mutex example:

type MyService struct {
    mu       sync.Mutex
    cache    string
    otherApi *OtherApi
}

func (s *MyService) getData(params string) (string, error) {
    // Lock further code execution until the mutex is unlocked
    s.mu.Lock()
    // Defer the unlocking of the mutex to the end of this function execution
    defer s.mu.Unlock()

    // Re-check the cache: a previous caller may have filled it while we waited on the lock
    if s.cache != "" {
        return s.cache, nil
    }

    result, err := s.otherApi.getData(params)
    if err != nil {
        return "", err
    }

    s.cache = result
    return result, nil
}

And with this, all calls of the getData method will wait for the first caller’s lock to be released before being allowed to continue. Only one execution is allowed at a time.

There will be scenarios where you need complex concurrency code, and that’s where you will start having difficulties in JavaScript.

Go was designed from the start with light and easy concurrency in mind, so the use of goroutines and channels is superb for achieving any complex concurrency task with ease. Along with this, Go has the notion of a Context flowing through the execution of concurrent tasks, where an early failure can cancel the other concurrent tasks (the short-circuit failure pattern). To make this clearer: imagine a function executed through an HTTP request, with some parallel calls inside it and quite a few function chains with I/O. If the request gets cancelled by the client, or some part of the code times out, a signal can be sent through the context to finish early. With this, you can stop the function execution entirely without making further I/O calls and just fail early. In Node.js such a thing does not exist out of the box at the moment: if an HTTP request executes A -> B -> C -> D and is cancelled at B, it will still execute C and D, doing unnecessary work.

Java is evolving with this in mind, developing a structured concurrency library for easier management of concurrent virtual-thread tasks, their lifecycle and their children.

Multithreading

Now that we are at the latest state of Node.js concurrency, we should talk about its general design. Node.js is powerful for I/O operations, but if you do a lot of CPU-intensive work, you will start noticing executions getting slower, because in the end it has a single thread, and the more work you do on the CPU, the more you run into the problem of blocking the event loop.

Node.js isn’t suited for building multithreaded applications. Some devs will say you can, but the reality is that it’s more of a hack than a proper threading model. It looks like the following (from the official Node.js site):

// threads.mjs
import { Worker, isMainThread,
  workerData, parentPort } from 'node:worker_threads';

if (isMainThread) {
  const data = 'some data';
  const worker = new Worker(import.meta.filename, { workerData: data });
  worker.on('message', msg => console.log('Reply from Thread:', msg));
} else {
  const source = workerData;
  parentPort.postMessage(btoa(source.toUpperCase()));
}

// run with `node threads.mjs`

It executes the same script in separate worker threads, each with its own isolate and event loop, and uses message passing to communicate between them. It has a flag to distinguish the main thread from the workers, and that’s that. Working with this and doing anything more complex is gruesomely hard and dirty. In the end these workers still have their own event loops and work in the same fashion. Just imagine creating a thread pool and using it this way: in Java there is the ExecutorService, which simplifies the lifecycle and management of threads, and in Go you have goroutines (virtual threads) that simplify concurrency, while here you have this abomination.

Other languages like Go and Java scale vertically out of the box as they are designed for multithreading.

Future

[Image: /posts/javascript-backend-is-bad-for-enterprise/plan.webp]

Let’s start with other languages first…

Modern languages like Go (1.0 released in 2012) don’t require too many changes in upcoming versions, especially when a language like Go is fully backwards compatible, yet it still receives big changes in terms of performance, security and features, like out-of-the-box binary embedding, generics (and the helper libraries for slice operations and other useful things like min/max that followed); 1.23 got iterators, and there are many more highly useful additions.

Older languages like Java (released in 1996) started quite a long time ago, around ~2014 with the release of Java 8 (or maybe sooner), to evolve the language for the future. They started major projects like Amber, Lambda, Loom, Panama and Valhalla in order to evolve or fix the language. For example, Loom was delivered in Java 21 and brings virtual threads to the platform, which is a huge deal and a much better approach than async/await; the Java engineers said they could have implemented async/await easily before, but wanted to go for the better alternative. Valhalla is under development and will take some years, but promises to bring value classes, so that not everything in Java has to be an object anymore; the biggest impact is on arrays of objects (a drastic reduction in memory usage and a performance gain in general). Quite a few garbage collectors have been introduced as well, most recently ZGC, and the JVM is now really a state-of-the-art VM. Remember, Java is holding onto backwards compatibility as well, and because of that it brings along a giant number of community libraries and software, like Cassandra, Kafka, Elasticsearch and so on.

You can see in the following video what a technical keynote for the evolution of a language looks like:

Now to get back to Node.js and JavaScript…

For Node, it’s a bit difficult to find good articles and announcements about features and the future roadmap of the language. It’s really hard to find a technical keynote talk about the language and its future (or they simply don’t exist). It seems the last technical keynotes were in 2018. But then again, developers don’t track these much at all; they just wait for a high-contrast influencer video with a goofy thumbnail to pop up so they can check out the “cool new stuff”.

I focused mostly on the official Node.js announcements page. You can see from their articles that they only recently started announcing language changes; before that it was mostly foundation changes or conferences. But let’s get to the evolution and future plans.

In the previous Evolution section I mentioned the big changes in concurrency; the other changes I noticed over the years, from Node 8 onwards for example, were extremely minor. There is always the claim that the V8 engine got a performance improvement, but did it really improve much? Most benchmarks are done on some specific functionality, and the question is: how much did your app use that functionality, enough to see an impact? You need to benchmark your app before and after, but a lot of devs don’t care; they just send you a link and tell you how great their language’s performance is getting.

So the features Node.js got (not counting fixes) were usually about providing a promise API for internal libraries like fs. Lately (in the last four Node versions, I think) it got a new argument for the error class, so you can pass the causing error, new Error('x failed', {cause: err}); a built-in test library; the built-in fetch HTTP client; HTTP 1.1 keepAlive by default; a change in the require syntax for core libraries from const fs = require('fs') to const fs = require('node:fs'); and the likes of that. Exciting stuff, isn’t it?
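For illustration, the cause option just chains the original error onto the new one (the names here are made up):

async function saveUser(db, user) {
  try {
    await db.save(user);
  } catch (err) {
    // the original error stays reachable via error.cause when logging
    throw new Error('saving user failed', { cause: err });
  }
}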

Future plans for Node.js and JavaScript improvements are extremely hard to find, or stay hidden until delivered. I remember seeing somewhere the plans for adding private class field support, like the following (this has since landed in the language):

class ClassWithPrivate {
   #privateField;
   #privateFieldWithInitializer = 42;

   #privateMethod() {
      // …
   }

   static #privateStaticField;
   static #privateStaticFieldWithInitializer = 42;

   static #privateStaticMethod() {
      // …
   }
}

Other than that, I don’t know (there will be some performance improvements and bug fixes, as usual).

And that’s that. That’s how the language is evolving, that’s what the future looks like, and these cases I showed are the answers I get from people about the language’s development and future, and they consider it good… Then you compare it with the Java JDK proposals, with their descriptions and discussions of why a feature should be added, going into detail on pros and cons, or with the Go articles on their blog, where they extensively showcase why they added a feature, how it works and what problem it solves.

Based on all of this, I really cannot fathom how people can defend Node.js/JavaScript language development roadmap.

Oh, but there’s actually something I forgot to mention that is now, of course, hyped…

Have you heard about Deno?

Or as how they put it on their official site:

“Deno is the open-source JavaScript runtime for the modern web. Built on web standards with zero-config TypeScript, unmatched security, and a complete built-in toolchain, Deno is the easiest, most productive way to JavaScript.”

It’s also based on the V8 JavaScript engine, and written in Rust. Deno was co-created by Ryan Dahl, the creator of Node.js. Dahl mentioned that he had regrets about the initial design decisions in Node.js and went on to create Deno (first developed in Go, but switched to Rust), which is able to re-use existing node packages (kinda). This introduces yet another split into the already fragmented JavaScript ecosystem, where people again stormed onto an untested beta product to use it directly in production and beta test there.

Bun - how about a new runtime?

Relatively soon after, there was a beta release of a new runtime for JavaScript to replace Node.js, called Bun, and from its official website we get:

“Develop, test, run, and bundle JavaScript & TypeScript projects—all with Bun. Bun is an all-in-one JavaScript runtime & toolkit designed for speed, complete with a bundler, test runner, and Node.js-compatible package manager.”

Where, of course, it shows off its performance comparison against Deno and Node, in which it’s the best, and of course those benchmarks aren’t biased or executed in isolation on really specific things. It’s a drop-in replacement for Node.js, so everything should work, right (until it doesn’t)?

What is interesting is that people opted to use Bun in their production apps while it hadn’t yet reached 1.0.0 and got weird behavior and crashes. It’s unbelievable how much people rush to use the latest things, even when those things explicitly state they are not ready for production, and then create videos and articles just to put out some content, painting a false picture and hyping people up to use it. Did I mention that the skill bar has lowered and that we are full of YouTube developers? Maybe this explains some technology choices.

JavaScript is a breeding swamp of new tools, runtimes, many different language transformations and ever-changing standards, where there is always a shiny new untested thing in the ecosystem every couple of months for everyone to rush and try directly in production and brag about using the next bleeding edge in their CRUD app of 5 requests per second.

TypeScript - JavaScript supercharged! ⚡

[Image: /posts/javascript-backend-is-bad-for-enterprise/supercharged.webp]

Now to talk about the elephant in the room: TypeScript. Here most devs will throw all the previous statements and issues in the water, because it’s TypeScript, not JavaScript, no?

Now, before we proceed, I just want you to be aware of the following statement: “TypeScript (TS) transpiles to JavaScript (JS)!”

“TypeScript is a strongly typed programming language that builds on JavaScript, giving you better tooling at any scale.”

You can also look at it as a superset of JavaScript, allowing you to specify the types directly in code, rather than in jsdoc comments. It looks like the following:

function calculatePercentage(quantity: number, percent: number): number {
    return (quantity * percent) / 100;
}

calculatePercentage("John", 10); // This would yield an error with the TypeScript transpiler

Or another example from their website:

const user = {
  firstName: "Angela",
  lastName: "Davis",
  role: "Professor",
}
 
console.log(user.name)
// Would fail compilation with error:
// Property 'name' does not exist on type '{ firstName: string; lastName: string; role: string; }'.

You have interfaces to utilize as well:

interface User {
  id: number
  firstName: string
  lastName: string
  role: string
}
 
function updateUser(id: number, update: Partial<User>) {
  const user = getUser(id)
  const newUser = { ...user, ...update }
  saveUser(id, newUser)
}

Decorators (from Nest.js)

@Controller('cats')
export class CatsController {
    @Post()
    create(@Body() createCatDto: CreateCatDto) {
        return 'This action adds a new cat';
    }
}

and many more good things. TypeScript can run alongside JavaScript, allowing you to gradually migrate a codebase from JavaScript to TypeScript.

Drastic improvement
TypeScript is a huge improvement in the maintainability of the language and the prevention of errors. Any organization creating anything more complex than a small script should utilize TypeScript, unless they want a never-ending stream of errors and bugs.

Using a library will now show you proper auto-completion and fail on invalid types. Even for old libraries that were not written in TypeScript, you can create type declarations so the transpiler can detect all possible cases and run checks. TypeScript might be the best thing that ever happened to JavaScript.

⚠️ Where the problems start… (or what it does not fix)

Did I mention that it transpiles into JavaScript? Yeah, in the end you’re still running JavaScript. TypeScript doesn’t make the language strictly typed; it helps a lot, but it’s a transpiler that checks the type-annotated code for errors and outputs JavaScript.

Here are some things that can break, or lie about, the types:

Outside input into your application

Unless you have correct validation, and you also know that the validation AND transformation work correctly, you can get a completely different type than the one you declared. It’s so easy to make a mistake here, and sometimes the library itself might not work properly and produce a wrong type; the TS compiler would never know.

Lazy developers

When working with types, it can be quite an annoyance to constantly convert from one type to another, or to create one class from another; or a developer might just not know how to do something. So they succumb to forcing the transpiler to comply, either by using the as keyword:

const user = {
  firstName: "Angela",
  lastName: "Davis",
  role: "Professor",
} as User; // This makes the transpiler trust that whatever this variable is, it's a User, even if it's not

or by telling TypeScript to ignore checks on the following line:

// @ts-ignore
console.log(user.name)

There will always be developers who succumb to these “hacks” and break the whole point of TypeScript!

TypeScript configuration

The transpiler has its own tsconfig.json file with quite a bit of configuration. One important flag that is turned off by default is strictNullChecks. This flag checks whether your code verifies the existence of nullable values and is THE most important flag to have, as these errors are the most common. Every organization or person will have their own set of TypeScript configurations, some libraries might not have these checks either, and this opens up a big surface for errors without anyone knowing. Yes, you can correct this in your own project, but you will sometimes inherit a project without it set, and once you set it, there will be a sea of transpiler errors and too much work to correct them all.
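For reference, enabling it in tsconfig.json is a one-liner ("strict": true also turns it on, together with the rest of the strict family):

{
  "compilerOptions": {
    "strict": true,
    "strictNullChecks": true
  }
}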

TypeScript can’t always detect

There are cases where TypeScript won’t be able to track that many changes to the types, or some particular manipulation. There have been cases where you needed to pass a class instance, but a spread operator was used when passing the data down, which effectively breaks the class, as it creates an anonymous object with those values:

class User {
    name: string;
    age: number;
    
    isAgeGreaterThan(age: number): boolean {
        return this.age > age;
    }
}

function verifyUser(user: User): void {
    if(!user.isAgeGreaterThan(18)) {
        throw new Error('User is underage')
    }
}

// assume `user` is a User instance obtained elsewhere
declare const user: User;

// compiles, but the spread copies only the data fields, not the method
verifyUser({...user, age: 20});

This will fail at runtime, as we are not passing an actual instance of the class but an anonymous object with the user’s fields. Newer versions of the TS transpiler, or maybe some flag configuration, might detect this, but in some cases (depending on all the transformations) it might not (or the developer might, as usual, add as User when passing the new object).

Other issues can come from the use of the Object.assign function; in my experience, TypeScript is unable to track the changes it makes to the object it’s updating.

Custom decorators can also be a problem, as they have the power to change object and method behavior in a way that’s hidden from the transpiler.

Debugging TypeScript libraries

If you have ever debugged or wanted to see what’s going on inside TypeScript libraries, you probably didn’t have a fun time: when you click on a function/class from a library, you are shown only the function/method signature, without any body. This is because the JavaScript code and the TypeScript types are generated separately, and that JavaScript code can also be mangled or difficult to read due to the transpilation process and the output target.

An example class type from the Nest.js framework:

// rpc-params-factory.d.ts
export declare class RpcParamsFactory {
    exchangeKeyForValue(type: number, data: string | undefined, args: unknown[]): any;
}

The transpiled JavaScript is usually right next to it:

// rpc-params-factory.js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
exports.RpcParamsFactory = void 0;
const rpc_paramtype_enum_1 = require("../enums/rpc-paramtype.enum");
class RpcParamsFactory {
   exchangeKeyForValue(type, data, args) {
      if (!args) {
         return null;
      }
      switch (type) {
         case rpc_paramtype_enum_1.RpcParamtype.PAYLOAD:
            return data ? args[0]?.[data] : args[0];
         case rpc_paramtype_enum_1.RpcParamtype.CONTEXT:
            return args[1];
         case rpc_paramtype_enum_1.RpcParamtype.GRPC_CALL:
            return args[2];
         default:
            return null;
      }
   }
}
exports.RpcParamsFactory = RpcParamsFactory;

Now, the above is simple and fairly clean JavaScript output, but imagine a more complex function, or different output settings, and constantly having to navigate to the original manually.

Libraries cannot be trusted

Just as with regular JavaScript, not even TypeScript libraries can be trusted. They can say that they return a certain value, but you might get something completely different, or they might develop problems when they support multiple types but one of those types starts causing issues. I had a case where TypeORM claimed to accept either the string version of a UUID or the class version, but it wouldn’t return any results with the string. These problems are really hard to find unless you test them for real.

Some libraries, like class-validator or class-transformer, might in certain cases act a bit differently than you thought. For example, you can mess up the order of the two, and the validation or the transformation might be invalid; or the transformation might work differently than expected, but you put a TypeScript type on that field, so the transpiler trusts you, the app actually receives something different, and the whole app is unaware.

These library issues cause severe problems and consume a lot of time, as they are hard to notice at all, and a lot of developers think that if a library declares it returns type x, it will always return type x. Older libraries that only ship type declarations have the most of these issues: since the code isn’t written in TypeScript, the types just sit beside the exposed functions/classes, “trusting” the code to match them.

TypeScript summary

TypeScript, rather than plain JavaScript, should always be used when the application is more than a small script, and all the problems above should be kept constantly in mind so that a bug doesn’t find its way in.

Other, strictly typed languages have drastically fewer issues around types, and the ones they do have pop up sooner and are far easier to detect. Also, because they have been strict since the language’s creation, all their libraries correctly expose what they receive and return, and those languages are (usually) compiled to native code, or, like Java, are a hybrid compiled to Java bytecode.

Build system

[Image: /posts/javascript-backend-is-bad-for-enterprise/build-system.webp]

JavaScript as a language never really had proper stewardship or a known direction, so a lot of the tooling and the build story was created by independent developers or other third parties, like Microsoft with TypeScript. Because of that, we have an ever-changing pile of build and other tools, where each project and organization has its own, which is one giant mess. You don’t simply start using the language: you need to set up a ton of tools, linters, builders and so on just to have some base to work from.

What this causes is that a lot of time is wasted on discussing, choosing and learning all the different tools and configurations just to start working in the language, and those choices will also influence how the project evolves down the line.

Long ago, there were quite a few build tools and task runners to help with the ongoing JavaScript evolution, with the added benefit of providing some extra features. I’ve gone through Browserify, Grunt, Gulp, Webpack and the TypeScript builders and task runners (and have probably forgotten some). Plus, there are newer ones on the horizon, like esbuild.

TypeScript is one way of building JavaScript apps, but TS doesn’t cover everything, so there are further steps to make the code even safer and more standardized.

One of the solutions is the typescript-eslint library, which scans your code for errors or bad practices (as defined by your configuration), plus the prettier code formatter.

Some might have heard of the Bower package manager; today it’s mostly yarn and the official npm.

Now we have come to the point where the language has so many tools, and all of those tools have configurations unique to each organization or person, and everyone will vouch for theirs being the one and only best. I don’t know about you, but I’ve seen a ton of time wasted on discussions about formatting and linting rules rather than on actual coding and problem-solving.

I just want to use the language with its official tools and do the damn work! I don’t want to spend days adjusting and fixing the setup and tools (because this is a never-ending story, as someone will later come along and start pushing their own subjective view).

Another star 🌟 on the scene is the Nx library for managing JavaScript mono-repos, which in a way standardizes an organization’s JavaScript projects, but also couples them extremely tightly and potentially causes problems that are hard to resolve later. Its point is to share code across all projects and re-deploy the affected ones. The re-deployment of affected projects is the only benefit I see; you can still share code with regular repos.

That’s the only benefit it brings, but it locks you into one programming language if your backend and front-end are combined, and it also complicates things when a problem is encountered. An example problem I hit was with migrations: the migration tool executes migration JS files dynamically, while Nx bundles everything through webpack and overrides the require functionality, making it impossible to run the JavaScript migration files that don’t get bundled.

Having separate repos makes developers think about proper segregation of functionality, making it truly standalone, and then you just pull the latest version of the library.

Developers, especially JavaScript ones, love shiny ✨ and bleeding-edge ♦️ things that will somehow bring the project to its glory and fix everything, when actually it’s hard work and discipline that do that, but who has time for that, right?

Dependency ecosystem

[Image: /posts/javascript-backend-is-bad-for-enterprise/jenga-tower.webp]

There’s a meme showing the gravitational pull of a black hole, and next to it, far greater, that of the npm packages you have downloaded into your project 🕳️.

Why are npm packages so large?
From the start there was the notion of building micro-packages that re-use other micro-packages, and libraries that did not want to copy-paste code used other packages instead. This leads to having a dependency which has dependencies that have their own dependencies, and that rabbit hole turns out pretty deep by the time you finish the installation.

A large pile of packages forces a library to constantly update itself, because the more libraries you use, the more maintenance you need to do, especially in terms of security. This becomes a never-ending annoyance and time sink, especially as libraries often break or introduce bugs. Speaking of breakage, I’ve had situations where a patch update to a library broke functionality, and it stayed hidden in production for a while until discovered. Note that this occurs often!

It’s highly advised to always pin your dependencies to exact versions and update them manually! By default, npm installs with the ^ symbol, meaning that the next install (like on CI/CD) can pull in a newer minor or patch version if one is present.
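
A minimal way to enforce this with stock npm (the package and version below are just an example):

```sh
# Make npm record exact versions instead of ^ ranges from now on
echo "save-exact=true" >> .npmrc

# Pin a single dependency explicitly
npm install express@4.18.2 --save-exact
```

With that, package.json records "express": "4.18.2" instead of "^4.18.2", so a CI install resolves the same version every time; committing the lockfile covers the transitive tree as well.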

But also, beware of the npm postinstall scripts of your packages. A 3rd-party library can execute its postinstall script and run malware on your system, and since a lot of libraries depend on many others, the impact can be huge, so the --ignore-scripts flag is required for proper security.
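
Both a one-off flag and a per-project default exist in stock npm:

```sh
# Skip all lifecycle scripts (postinstall included) for this install
npm ci --ignore-scripts

# Or make it the default for the project
echo "ignore-scripts=true" >> .npmrc
```

Keep in mind that a few packages genuinely need their install scripts (native addons, for example), so those have to be rebuilt explicitly afterwards.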

Library quality

I think that JavaScript is most famous for its ecosystem of libraries, ranging from the is-odd library

Returns true if the given number is odd, and is an integer that does not exceed the JavaScript MAXIMUM_SAFE_INTEGER.
```js
const isOdd = require('is-odd');

console.log(isOdd('1')); //=> true
console.log(isOdd('3')); //=> true

console.log(isOdd(0)); //=> false
console.log(isOdd(2)); //=> false
```

to the likes of lodash or underscore, which mostly provide functions that already exist in the language, with the interesting twist that they might not always work as the docs say 😉.

As mentioned at the beginning, the micro-package idea created a flood of libraries instead of just copy-pasting those 10-20 lines of code into your own utility library. At one point there was the left-pad incident: a package of roughly a dozen lines was deleted from the npm registry, which caused thousands of software projects that utilized it to fail to build or install.
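
For perspective, the job left-pad did is a built-in one-liner in the language today; String.prototype.padStart has been part of the standard since ES2017:

```js
// What used to be an entire npm dependency is now a built-in method
'5'.padStart(3, '0');  //=> '005'
'abc'.padStart(6);     //=> '   abc' (padded with spaces by default)
```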

JavaScript devs have such a strong urge to re-invent the wheel that we end up with so many libraries doing the same thing, but with lower quality.

On projects, devs love taking the chance to create something new that’s not present in the ecosystem (or they kinda turn a blind eye to what exists) and make their own library x for either working with RabbitMQ or adding some Kafka features.

Note: I’m not saying that the community should not create new things that are missing. What I’m referring to are projects that want to build new functionality, or fix existing functionality, in-house: creating medium-to-complex libraries is not easy at all, and for the complex ones you should stop and think whether the language you are using is the correct one. But I fear the feeling of creating your own version is too strong with them.

For example, I was on a project that had custom offset handling for the consumption of Kafka messages, wrapped around the kafkajs library, of course with no integration or unit tests, and with custom dependency-injection logic built on setters and getters. Another situation was on a legacy project with a unique messaging service wrapping amqplib, full of custom features for managing the propagation of events, backed by an alternative task queue and an in-memory queue; then there was a custom logger for building nested logs, plus some other strange functionality, and so on. You can imagine how many problems maintaining all of these caused.

Some languages like Java have most of these things done and ready for you to use, so you can focus on the project at hand and deliver business-critical functionality instead of playing library developer (unless you have a truly unique need).

Even if a library is new, it can have quite a few problems, but what staggered me the most was that even basic functionality would not work as intended. One example: when our team used TypeORM with PostgreSQL, the query it generated for a simple delete by id plus one other field would not translate correctly through the driver and execute. In another situation, building a slightly more complex query generated SQL somewhat different from what it should have.
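
To make the shape of that first bug concrete, the failing call was roughly of this form (entity and field names are hypothetical, reconstructed from memory):

```ts
// A plain criteria-based delete through TypeORM's repository API,
// exactly the kind of call you expect to just work
await orderRepository.delete({ id: orderId, customerId: customerId });

// Expected SQL along the lines of:
//   DELETE FROM "order" WHERE "id" = $1 AND "customerId" = $2
// In our case the criteria did not translate correctly through the
// driver, so the statement did not execute as intended.
```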

Another library issue we had, related to either Mongoose or class-transformer, was that it would always generate a new UUID on a field that already had a value. We verified this by debugging inside the library, which had a section with so many type checks that it missed handling this specific object type and thus created a new value, random every single time; of course, this was also hidden in production for a while until discovered.

And there were many more…

⛑️ Imagine now that you are dealing with prices, or hope to God you are not a FinTech platform using JavaScript: the surface for error is massive! And if you make a mistake, it’s literally costly. In Java, for example, you would have more safety in terms of types, but also great support, as the language has been out there for so long that it has a huge base library and many bulletproof libraries which stood the test of time and got patched for all sorts of issues, like the BigDecimal class with its many methods and massive documentation. Would you rather rely on yourself writing code to handle the decimal places of prices, rely on a random 3rd-party lib in the JavaScript ecosystem, or use a robust, battle-tested class from the core library?
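
JavaScript’s only built-in number type is an IEEE-754 double, so even trivial price arithmetic needs care; the classic illustration:

```js
// No built-in decimal type: every number is a binary float
0.1 + 0.2;          //=> 0.30000000000000004
0.1 + 0.2 === 0.3;  //=> false

// A common workaround is doing all arithmetic in integer cents,
// a convention the whole codebase must uphold manually
const priceInCents = 1999;
const totalInCents = 3 * priceInCents; //=> 5997, exact
```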

There is a fantastic video about the Prisma ORM, based on the article Don’t use Prisma, which I think paints the state of the library much more accurately. Note that I had a team lead trying to push the Prisma ORM to replace our TypeORM and Mongoose, and I still think that most developers pick tools based on how pretty the marketing site is, without looking into what the tool actually brings to the table in comparison. A lot of developers love living on the bleeding edge (more bleeding than edge).

Dependency summary
When a language by itself does not provide safety or certain capabilities, when the tooling and standards keep changing too fast and without a clear guideline, and when the community favors new and shiny over robust and tries to re-use too much, the whole ecosystem of libraries and dependencies ends up in such a volatile state that it introduces a big risk to the functionality of your application.

Summary

/posts/javascript-backend-is-bad-for-enterprise/desert-meditation.webp

Every language is worth trying and each has its place. For JavaScript, it’s obvious that it will continue to dominate the front-end domain for a long time, that it will stay prevalent on the backend through SSR (server-side rendering) frameworks and simple API backends, and that it offers fantastic solutions like Cypress and k6. But for the enterprise domain, and anything on the backend requiring strict functionality, a long maintenance window, and bulletproof libraries, I would advise going with Go or Java.

Programmers shouldn’t tie themselves to one language; they should look at languages as tools for achieving their goals! Understanding how compiled and interpreted languages work opens up your way of thinking.

The big reason why I think most developers (not all, of course) praise their language so much without valid arguments is usually simple: they don’t want to learn a new language, whether due to the hassle or the fear of not being able to understand it, and so they produce arguments to justify staying with the one they know.