Tag Archives: Code

When do you need useMemo (and when don’t you)

It can be confusing to know when we should use useMemo and when we shouldn’t.

useMemo isn’t free

useMemo comes with a small cost. So we should understand when it’s a benefit, and when it doesn’t provide any value.

  • A small amount of RAM to memoize & remember the resulting value
  • A small number of CPU cycles to loop over & compare the dependency array & run the internal fn of useMemo

So we should be smart when we use it.

useMemo isn’t always needed

useMemo isn’t needed in certain cases. For example, casting something to Boolean or doing simple math isn’t an expensive operation, and it returns a primitive like “boolean” or “number”. These values will always be equal every time the component re-renders:

true === true and 5 === 5.

Conversely, arrays and objects don’t have referential equality:

 [] !== [] and {} !== {} and new Date !== new Date.

Two tests for whether you need useMemo

  1. Is the calculation to get the value complex (looping, calculating, initializing a class)?
    • Or is it cheap (comparisons of numbers, booleans, casting types, simple math)?
  2. Is the returned value complex (an object, array, fn, or class)?
    • Or is it a primitive value that is simple to compare (number, boolean, string, null, undefined)?

Examples

  const date = useMemo(() => Date.parse(isoDateString), [isoDateString]);

We should use useMemo

  1. 🛑 Initializes the Date class
  2. 🛑 Returns a Date Object, which is not a primitive

  const isAdmin = useMemo(() => runExpensiveSearch(accounts), [accounts]);

We should use useMemo

  1. 🛑 Runs an expensive function to get the value
  2. ✅ Returns a primitive value

In cases where it’s neither an expensive calculation nor a complex object, useMemo isn’t necessary.


-  const isArchived = useMemo(() => Boolean(process?.deletedAt), [process?.deletedAt]);
+  const isArchived = Boolean(process?.deletedAt);

We don’t need useMemo

  1. ✅ Casts to a boolean which is a cheap operation
  2. ✅ Returns a primitive value

-  const numberOfAccounts = useMemo(() => accounts.length, [accounts]);
+  const numberOfAccounts = accounts.length;

We don’t need useMemo

  1. ✅ Getting the length property is cheap
  2. ✅ Returns a primitive value

Just remember the two tests

  • Is the calculation complex / expensive?
  • Is the value not a primitive?

What about useCallback?!

I’m so glad you asked! The principles are similar. In general, all callback functions should be memoized via useCallback. Functions are non-primitive values and thus never have referential equality across renders unless we memoize them via useCallback.
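
As a rough sketch (the saveDraft function and draftId dependency are illustrative names, not from this post), memoizing a callback looks like this:

const handleSave = useCallback(() => {
  saveDraft(draftId); // re-created only when draftId changes
}, [draftId]);

Without useCallback, handleSave would be a brand new function on every render, so any memoized child receiving it would re-render anyway.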

(Cover Photo: Feldstraße Bunker, Hamburg, Germany – Jonathan Stassen / JStassen Photography)

Naming Booleans: Readability with Affirmative Booleans

As a rule of thumb: name booleans in the affirmative.

One of the most challenging aspects of software development is choosing good names for variables. This is particularly true when it comes to Boolean variables.

“Naming things is hard.”

– Some Software Developer

Affirmative names are those that start with words like is, has, or can which clearly indicate that the variable is a Boolean.

Affirmative Boolean Variables

Our goal is to be consistent in the naming & to keep it readable — almost like a sentence. Take a look at these examples: “is logged in”, “has favorites”, “has no favorites”.

Affirmative        Negative
isLoggedIn         !isLoggedIn
isEmpty            !isEmpty
hasFavorites       !hasFavorites
canEdit            !canEdit

Great Boolean Variable Names

Reading these like a sentence is natural: “is logged in” or “can not edit”. There is only one negation flip your mind must do when reading the negative cases.

Negative Boolean Variables

Now, let’s consider what happens when we deviate from the Affirmative Approach and use negative names.

Affirmative?       Negative?
notLoggedIn        !notLoggedIn
isNotEmpty         !isNotEmpty
hasNoFavorites     !hasNoFavorites
canNotEdit         !canNotEdit

Confusing Boolean Variable Names

Our negative cases create a double negative! 😯

Try to read that negative statement as a sentence. “Does not have no favorites?” I think I know what that means, but that feels like an awkward way of saying “hasFavorites”.

The problem with negatively named booleans is that they introduce the potential for double negatives. The Affirmative Boolean approach is more straightforward to mentally parse.
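
A small sketch of how the double negative shows up in real code (hasNoFavorites, hasFavorites, and renderFavoritesList are illustrative names):

// Negative naming: flipping it produces a double negative
if (!hasNoFavorites) {
  renderFavoritesList();
}

// Affirmative naming: the same condition reads like a sentence
if (hasFavorites) {
  renderFavoritesList();
}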

Conclusion

In general, naming Booleans in the affirmative is a practice that can significantly improve code understandability and maintainability.

Avoid no, not, and other words that create the possibility of a double negative when the boolean is flipped.

Naming things is hard, but naming boolean variables in the affirmative is a simple, yet effective way to help improve your code readability. Your future self and your teammates will thank you for it.

If you like thinking about naming, you may also enjoy thinking about pagination naming.

(Cover Photo: Factory in Sneek, Netherlands – Jonathan Stassen / JStassen Photography)

Migrating a codebase to enable strictNullChecks

Migrating a codebase to enable strictNullChecks can be tricky.

TypeScript’s strictNullChecks is a powerful compiler flag that enhances code safety by detecting potential null and undefined values at compile time.

There are some interesting discussions on migration, but to me none of them were quite satisfying.

I believe there is another incremental way

Let’s use this code as our example. With strictNullChecks: false it will not error. With strictNullChecks: true it will.

type User = {
  email?: string
}

function getUserEmail(user: User): string {
  return user.email; // user.email might be null or undefined
}

Simple enough to fix. But in a large codebase we may have hundreds of these errors, and many will be much more complex. In my team’s codebase, we had north of 500 errors and the count was unintentionally increasing.

Two Goals:

  • How might we incrementally resolve existing issues?
  • How might we prevent additional issues creeping in?

Enable strictNullChecks → Mark errors with @ts-expect-error → Set up an ESLint rule → Monitor with esplint

1. Enable strictNullChecks

Enabling strictNullChecks is the natural first step in migrating. Adjust the compilerOptions flag for strictNullChecks in your tsconfig.json.

{
  "compilerOptions": {
    "strictNullChecks": true,
  }
}

By setting strictNullChecks to true, the TypeScript compiler will perform stricter checks on nullable values, reducing the possibility of null or undefined-related runtime errors.

2. Mark all existing errors with @ts-expect-error

There will likely be a large number of strictNullChecks errors in an existing codebase. Realistically, we probably can’t fix them all right away. We can use TypeScript’s @ts-expect-error comment before every instance of an error to temporarily suppress strictNullChecks errors per line.

function getUserEmail(user: User): string {
  // @ts-expect-error: 🐛 There is a null/undefined issue here which could cause bugs! Please fix me.
  return user.email;
}

This tells the TypeScript compiler that we’re aware of the error and currently expect it. We are marking them for further consideration during the migration process.

As an aside: @ts-expect-error is generally preferred over @ts-ignore.
@ts-expect-error - Is temporary. Once the issue is fixed, TypeScript will remind us that we can remove the @ts-expect-error.
@ts-ignore - Is more permanent. It suppresses the error and doesn't expect it to be fixed later.

At this point you could finish here!

However I recommend leveraging eslint to also keep us accountable.

3. Using eslint to highlight lines needing a refactor

While @ts-expect-error comments provide a temporary workaround, it’s important to gradually eliminate their usage to achieve the full benefits of strictNullChecks. Relying on @ts-expect-error extensively can undermine the benefits of type safety. We should flag these as not-ok in our code base. I would like to have a red or yellow squiggle marking them.

With ESLint we can configure @typescript-eslint/ban-ts-comment to warn on the usage of @ts-* comments. This further makes it clear in our editors that @ts-expect-error is temporary and should be fixed.

Example .eslintrc.json:

{
  "overrides": [
    {
      "files": ["*.ts", "*.tsx"],
      "rules": {
        "@typescript-eslint/ban-ts-comment": [
          "warn", {
            "ts-expect-error": true,
            "ts-ignore": true,
            "ts-nocheck": true,
            "ts-check": true
          }
        ]
      }
    }
  ]
}

4. Using eslint to discourage new issues

To take the enforcement of code quality a step further, we can introduce esplint, a tool that specializes in tracking ESLint warning counts and enforcing that the count only decreases. By leveraging esplint, we can keep a count of @ts-expect-error occurrences in our codebase. This count also serves as a metric to gauge progress during the migration. The goal is to steadily reduce the count, indicating a decreasing reliance on @ts-expect-error comments, and thus an increase in strictNullChecks coverage and an overall improvement in code quality.
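
esplint is configured via an .esplintrc.js file at the project root. The shape below is a hedged sketch from memory, so confirm the exact keys against esplint’s README:

module.exports = {
  // folders esplint should scan when recording warning counts
  surfaceArea: ["."],
  // ESLint rules whose warning counts should only ever decrease
  rules: ["@typescript-eslint/ban-ts-comment"]
};

Running the esplint CLI records the current counts and fails a later run if they go up, which is exactly the ratchet we want here.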

From here the codebase is ready to be slowly refactored. We encourage our team, when working on stories that touch code near one of these errors, to take the time to refactor and clean up the null checking.

This refactoring might involve implementing better error handling mechanisms, like using TypeScript’s union types, optional chaining (?.), or nullish coalescing operator (??).
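
As one hedged example of what that cleanup might look like, here is the getUserEmail function from earlier rewritten with the nullish coalescing operator (the empty-string fallback is illustrative; the right fallback depends on your domain):

function getUserEmail(user: User): string {
  return user.email ?? ''; // fall back to an empty string when email is missing
}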

Conclusion

Migrating a codebase to enable strictNullChecks can significantly improve code safety and enhance overall code quality. I believe following this pattern is a pragmatic and straightforward approach to enabling strictNullChecks. With diligent effort, we can all embrace strictNullChecks and enjoy the benefits of reduced runtime errors, writing more code with confidence.

(Cover photo: White Sands National Park, New Mexico – Jonathan Stassen / JStassen Photography)

An Introduction to Redux

Redux’s ideology is a unidirectional data flow. This pattern reduces long term complexity, encourages re-usability, and generally discourages spaghetti code. 🍝

Video Series: https://egghead.io/courses/getting-started-with-redux

Terms & Descriptions

The core of everything is the Store, Action, and Dispatch. In its simplest form it’s all you technically need. From there Thunks & Sagas enhance the tooling around the Dispatching of Actions.

Store

  • A singleton object/state for the app.

Action

  • A plain object (with a type) describing a change you want made to the Store.

Dispatch

  • You “Dispatch an Action to the Store” to update the store.
  • Dispatch sends the Action payload through the Reducer.

Reducer

  • Receives the Action payload from Dispatch and modifies the Store.
  • Reducers contain no business logic, they only modify the Store described in the Action
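
Pulling the core terms above together, a minimal sketch (the counter example is mine, not from the original notes):

import { createStore } from 'redux';

// Reducer: applies an Action to the current state, no business logic
function counter(state = { count: 0 }, action) {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
}

const store = createStore(counter);    // Store: the singleton app state
store.dispatch({ type: 'INCREMENT' }); // Dispatch an Action to the Store
store.getState();                      // { count: 1 }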

Thunk

  • Considered the old way, but sometimes still has great applications in simple cases.
  • You can “Dispatch a Thunk to either access state or do ajax work”
  • Within a thunk you can call additional Dispatch
  • Within a thunk you can access the state of the store.
    • Good for conditionally firing subsequent api calls, or dispatches.
    • Good for pulling together data from the store into a dispatch.
  • Good for very simple ajax calls, you can Dispatch Actions from ajax response
  • The best way to understand Thunks, in my opinion, is to look at their roughly 10 lines of source code (sketched below):
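
From memory, the redux-thunk middleware is essentially this (paraphrased, not copied verbatim from the library):

function createThunkMiddleware(extraArgument) {
  return ({ dispatch, getState }) => (next) => (action) => {
    // If the dispatched "action" is a function, call it with dispatch & getState
    if (typeof action === 'function') {
      return action(dispatch, getState, extraArgument);
    }
    // Otherwise pass the plain Action along to the Reducer as usual
    return next(action);
  };
}

const thunk = createThunkMiddleware();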

Saga

  • Regarded as a better replacement for Thunks
  • Can require more effort than Thunks to understand, and build.
  • Within a Saga you can access the state of the store.
    • Great for conditionally firing subsequent api calls, or dispatches.
    • Great for pulling together data from the store into a dispatch.
  • Sagas can subscribe to listen and fire when some Actions have been Dispatched
  • Great for moving side effects to be self contained instead of sprinkled throughout the app
  • Provides great debounce / cancel / takelatest logic nearly for free.
  • Can do long running / scheduled Actions
  • https://redux-saga.js.org/docs/introduction/BeginnerTutorial.html
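
A minimal saga illustrating the points above (the action types and the api module are illustrative, not from the original notes):

import { call, put, takeLatest } from 'redux-saga/effects';
import api from './api'; // hypothetical module exposing fetchUser(id)

// Worker saga: performs the side effect, then Dispatches a follow-up Action
function* fetchUser(action) {
  try {
    const user = yield call(api.fetchUser, action.payload.id);
    yield put({ type: 'USER_FETCH_SUCCEEDED', user });
  } catch (e) {
    yield put({ type: 'USER_FETCH_FAILED', message: e.message });
  }
}

// Watcher saga: subscribes to an Action and keeps only the latest request
function* mySaga() {
  yield takeLatest('USER_FETCH_REQUESTED', fetchUser);
}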

(cover photo credit: Jonathan Stassen / JStassen Photography)

Migrating API servers

Join me for a story about my journey migrating my personal API server to a new host. Never has there been such a thriller as this adventure. There are learnings, distractions, stories, and of course a bittersweet goodbye.

What do I host with my API server?

It’s a Node.JS express server.

Code is a joy of mine. In my spare time I tend to spin up side projects and build experiments. Many of these ideas depend on storing & responding with data, thus I wrote a lightweight NodeJS express server to act as my generic multi-tenant API to serve all the apps I may rapidly prototype.

Some notable projects include:

Why Migrate?

  • Better Specs!
  • Upgrade OS!
  • Avoid fixing broken things!

1) I use DigitalOcean as my host. The server cost $5/mo. I created the box 4yrs ago. For new boxes DigitalOcean now offers more RAM & Hard Drive space at the $5 tier.

What happened to jstassen-02? RIP

2) Ubuntu upgrade from 16.04 to 18.04. I could just upgrade the system, but there’s value in starting fresh, right?

3) Ok ok, dpkg is also very broken on jstassen-01, something with the python 3.5 package being in a very bad inconsistent state. I would really like to install Docker and start running MySQL or MongoDB. I started to fix dpkg to get this working, but that went down a rabbit hole.

Given these 3 nudges, it just made sense to swap to a new box, install node and call it a day.

I got distracted at LiteSpeed!

I used DigitalOcean’s pre-configured Node boxes to get me started instead of going from a bare Ubuntu box this time. They have some nice ssh security & software firewalls prebuilt. Wait, but what are these other options?

Ooo what’s this? OpenLiteSpeed NodeJS? Never heard of it, let’s try it out!

OpenLiteSpeed is a Reverse Proxy in the same vein as Apache & Nginx. Hm, should I be using a Reverse Proxy with my node server? Ok, I’m swayed, let’s try it, can’t hurt.

After much confusion and configuration (C&C) I had things running pretty well. It required some modifications to the node app. The benefit of running the Reverse Proxy: the box can now listen on port 443 (https) and, based on the domain name, route to separate apps in the future. Do I need this? Not really, but I don’t mind the option.

OpenLiteSpeed Listener configuration & Virtual host mapping page. This will be handy.

Then the code changes started

OpenLiteSpeed integrates & uses Let’s Encrypt out of the box. Previously I had the Node app handling serving up the certs. It’s rather nice to have the Node app be responsible for less. This brings the dev and production app closer together in parity.

The Database files are document based dbs stored to disk in flat files. It was nice to better organize where these were located on my file system. The migration to a docker based mysql or mongo db is a separate project. (whew avoided one distraction)

A new home

Next was updating the url to a more generic one. I previously used https://jstassen-01.jstassen.com. I could simply point that url at the new box. But that’s kinda ugly: jstassen-01 pointing to a server named jstassen-03, right? Hm. What about creating a url like https://api.jstassen.com? Then it won’t matter what box it points to in the future.

Fun fact, API calls from an app won’t follow 301 redirects. So, redirecting jstassen-01.jstassen.com -> api.jstassen.com won’t work, especially with POST requests. Well Bummer.

No worries, I can update all my apps to use the new url. No big deal! … oh right, that’s 11 different projects. Hm.

Tracking my progress. Emojis always help.

Half were very easy. However, I wrote my first Google Chrome extension and Alexa skill back 4 years ago, and last updated & published them about 1 year ago. A lot of security changes have been made to how these apps are now built: they have refined their permissions, coding patterns, and APIs. Both were previously grandfathered in as legacy, but to redeploy, I needed to upgrade them. Sure, that can’t take long.

Next I noticed OAuth apps were failing. Cookies were failing to be set entirely. Kind of a critical piece to remembering which user is authenticated! Interestingly, Express.js by default won’t trust credentials forwarded to it through a reverse proxy (like LiteSpeed). I just needed to allow 1 layer of proxied requests: app.set('trust proxy', 1). Well, that one liner took an evening to figure out, haha.
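
For context, here is a minimal sketch of that Express setting in place (only the trust proxy line comes from this story; the rest is boilerplate):

const express = require('express');
const app = express();

// Trust exactly one reverse-proxy hop (LiteSpeed here), so Express reads the
// X-Forwarded-* headers and secure cookies survive the proxy.
app.set('trust proxy', 1);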

You mean I have to rebuild?

To use the newest packages, I needed to refactor both the Google Chrome extension and the Alexa skill. 1 week later, whew, it was complete! On the upside, all my dependencies are fresh and up to date. I now have a modern promise based ajax library and promise based db read / writes. Fancy.

I swear all I wanted to do was copy the code over to a new server and start it up. I didn’t bargain for these code improvements and refactoring.

Performance Testing

Is it faster? I ran 1,000 requests, 10 concurrent, against a heavy GET endpoint. The new box is on par and just marginally faster (maybe 1%?), but it’s an insignificant difference. Reassuring nonetheless.

RIP jstassen-02, you were taken from us much too soon.

jstassen-02 (RIP) was a failed experiment running a Plesk server. It was heavy, a RAM hog, and just not optimized. Not to mention Plesk limits your vhosts. API calls sometimes took twice as long compared to jstassen-01.

Backing up

It’s time to say farewell and delete jstassen-01. I’m not scared at all, why would I be? And yet I hesitate.

I found this YouTube video with a reasonable way to create an archive of the Ubuntu box that I can hold on to just in case. Can I restore from it? Hard to say. But at least the data will be there in the off chance I missed something.

# Backup
sudo tar -cvpzf jstassen-01.tar.gz --exclude=/backup/jstassen-01.tar.gz --one-file-system /

# Restore
sudo tar -xvpzf /path/to/jstassen-01.tar.gz -C /directory/to/restore/to --numeric-owner

A small 3.24GB archive. I could make the archive smaller by clearing npm/yarn cache folders and temp files. But it’s close enough, not too bad.

Maybe next I’ll experiment with creating a new Droplet for a day (a droplet for a day probably costs something like $0.17) and try a restore. It would be interesting to understand this style of backup.

This probably was a bit overkill, since I’ve also started to create one-off private repos on GitHub and use them as backups as well. So I committed a version of my whole home directory too.

Saying “Goodbye” is never easy

Now to remove jstassen-01…

I’ll say a little homily when I click destroy…

End of line

Code Reviews on Closed Pull Requests

Good Code Reviews (CR) on Pull Requests (PR) are at the heart of creating high quality code. Code reviews on closed pull requests can feel awkward.

I believe we should welcome feedback on all PRs — even closed ones.

A coworker of mine came back from a short vacation and read over the PRs I had opened and merged while they were out. They spotted a bug in one of my PRs and left a comment pointing it out. However, they felt deeply guilty commenting on a closed PR and apologized to me. This is odd to me.

Merged PRs often get treated as a “ship that has sailed”. Treating merged code as “untouchable” forgoes the ownership and responsibility we developers have for the code we create.

I suggest a cultural shift: To strive to produce high quality code, we ought to welcome and be open to Code Review feedback, before, during, and after the life of a Pull Request.

How we allocate time to address the feedback will range from team to team, but it could be new backlogged stories, opening a new PR, or just answering questions.


How do you feel about code reviews on closed pull requests?
Is there a better way to handle code feedback after it’s merged?


Using Immutable.js .update() instead of .get() then .set()

Often you need to deeply update a complex Map or List. Using Immutable.js .update() is a powerful way to do this kind of update.

Given this state

const myState = Immutable.Map({
  '123': Immutable.Map({
    id: 123,
    b: false,
  }),
  '456': Immutable.Map({
    id: 456,
    b: true,
  }),
});

Don’t use .get() then .set()

const myUpdatedItem = myState.get('123').set('b', true);
const myNewState = myState.set('123', myUpdatedItem);

Using .get() then .set() seems simple and clean, but it’s slow: it fully replaces the current Map. While the results are the same, Immutable.js cannot optimize against it and cannot make some faster assumptions.

Using Immutable.js .update()

const myNewState = myState.update('123', (myMap) => {
  return myMap.set('b', true);
});

It is much better to use Immutable.js .update(). Immutable can keep more references and make assumptions that allow it to optimize the update. For more, check the .update() docs.

You can also use .updateIn() for deeply nested data. Read my post on Immutable get() vs .getIn() to see how it works.
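
For deeply nested data, a hedged sketch of .updateIn() (this assumes a nested shape with a settings Map, which is not part of the state example above):

const myNewState = myState.updateIn(['123', 'settings'], (settings) => {
  return settings.set('theme', 'dark');
});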

Pagination naming conventions

I’ve been working with some APIs that have different patterns for pagination, which got me thinking about pagination naming conventions.

The Problem with ‘up’ and ‘down’ naming

Namely, when describing pagination actions I think it’s probably best to avoid ‘up’ and ‘down’ for naming. These are descriptive words for the UI, not so much for the data.

A -> Z

That is, ‘up’ could potentially mean different things in the data depending on what you’re presenting. Let’s say you sort a list A -> Z by default, but can also sort Z -> A. What does ‘down’ look like in each of these situations? ‘Down’ in our minds may imply ‘down’ the alphabet towards the end, but once the sort can flip to Z -> A, the idea of down becomes ambiguous and a bit confusing. For instance, if we’re sitting on page M and say ‘down’, which way does that mean?

Newest -> Oldest

Let’s consider something that has more chronology. Perhaps a list of recent transactions sorted by date. We could safely presume that as you scroll down you retrieve older and older transactions. Describing each page as ‘down’ seems reasonable; we’re paging down in time. Specifically, the unix epoch timestamp is a number that literally goes down as you go back in time: 1508480500 -> 1508480000. Makes sense.

Now let’s flip our transaction order to be oldest first. Now we go 1508480000 -> 1508480500 when paginating down. Paging down makes the number go up. That can get confusing.

Scroll down -> Ticker

To emphasize that up and down are a UI concern and description, let’s try one last scenario.

Let’s say we have a simple twitter feed where we render out messages. Let’s say it’s your typical feed that scrolls down along the page. Paging down makes sense here.

But let’s say we introduce a stock ticker-styled feed that scrolls horizontally; now down doesn’t make as much sense. Right means down.

Punchline: next and previous

Paginating via more generic terms like next and previous seems much cleaner regardless of the UI rendering style (see the sketch after this list).

  • They don’t describe the UI directly.
  • They don’t, per se, describe the direction of sorting.
  • They describe the data intent of getting the next chunk.
  • They are ambiguous enough that next doesn’t imply a direction but rather an intent.
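
A hedged sketch of what that naming might look like in a response shape (the field names are illustrative, not from any particular API I used):

interface Page<T> {
  items: T[];
  next: string | null;     // cursor for the next chunk, null when at the end
  previous: string | null; // cursor for the previous chunk, null when at the start
}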

Pagination is hard

Pagination naming is hard, but good design is amazing. These are just some of my observations and thoughts after working with different apps and APIs. Who knows, maybe there is something better!

React “statePropsPrecalculationError” and TypeError ‘state’ of undefined

Debugging a React component can be a pain.

There are many causes for “statePropsPrecalculationError”; for debugging, see my post on Debugging React statePropsPrecalculationError.

When you specifically have “‘state’ of undefined”, the fix is simple:

Recently I ran into “statePropsPrecalculationError” alongside the console warning TypeError: Cannot read property ‘state’ of undefined(…).

This is super easy to fix, but not immediately obvious. Whenever you use `this.setState()` inside your component, you must define your initial state!

Just add a `getInitialState()` definition!


const MyComponent = React.createClass({
  getInitialState() {
    return {
      isOpen: false
    };
  },
  toggleOpen() {
    this.setState({isOpen: !this.state.isOpen});
  },
  render() {
    // Wrapping element assumed; the original markup was lost in formatting
    return (
      <div onClick={this.toggleOpen}>
        Hello World!
      </div>
    );
  }
});