Lessons Learned from My First Full Stack Application
03/08/2017
All of the things I've been learning for the past two years were finally brought together to build out my first full stack application for Free Code Camp, a polling app. It's deployed! You can view the source on GitHub.
I took my time on this project because while I was comfortable with React, Express, and MongoDB, I'd never used them together to build a large, modern CRUD application. Piling on to that, I had no experience at all with Redux or the Flux pattern in general, much less how to pass state around in a larger React/Redux application. This was also my first experience visualizing data with D3.js and dealing with user authentication.
I've learned more from this project than anything else I've ever built. This seems like a good time to take a step back to reflect on and share some of the most useful lessons I've learned from this project.
This is a very long post, so feel free to jump around.
- Thank You Virtual Mentors
- An Overview of Vote
- Linting with Standard JS (Use with caution)
- Why Standard (or Semistandard)?
- Using Standard Without React
- Using Standard With React
- Webpack 2
- Optimizing Your Bundle(s)
- Linting with Webpack
- The Client
- The Server
- The Database: MongoDB
Thank You Virtual Mentors
First off, I'd like to thank Brian Holt for both of his mind-bending Complete Intro To React workshops on Front End Masters. The workflow he shares is a simple and scalable way to think about building a React application. As you build it up, you bring in new tools only as you need them, and refactor fast and often. I can't recommend his workshops enough, even if you've been using React for years. The insights he shares are well worth the Front End Masters subscription. Check out all of Kyle Simpson's workshops while you're at it!
I'd also like to thank Rem Zolotykh for putting together such a thorough YouTube series about building a React/Redux application with JWT authentication. While you should never use a custom authentication strategy for real-world production apps (use Passport instead), he offered a detailed overview of how to build an authentication flow using JSON web tokens, localStorage, and the proper headers. It was also very helpful to get another perspective on how to wire up React and Redux, and how to handle forms with controlled components.
Finally, I owe a heavy amount of gratitude to Robert M. Pirsig. While working on this project, I hit a lot of brick walls. To unwind after getting put in my place by a waterfall of error messages, I would read Zen and the Art of Motorcycle Maintenance. After picking at the first half for about a year, I finished the second half within a few weeks while working on this project. The explorations of Quality in the book correlate directly with programming, and with life in general. It put a lot of mental blocks I had with thinking about code into a much broader perspective, and inspired a deeper sense of kinship with the craft. It changed the way I perceive and approach bugs. I've gained an appreciation for them, as frustrating and ego-crushing as they can be. Each hard bug illuminates a gap in knowledge with an opportunity to learn something profound, and you grow because of it. As Ryan Holiday's cult-classic is titled, "The Obstacle Is the Way." In turn, this naturally leads to writing better quality software and becoming better prepared and energized to contribute fresh perspectives and innovations to open source projects, to the companies you work for, or even to an entire industry by finding your own blue ocean with your own open source project or company. Quality wins out in the end. Customers know it when they find it. A bug may seem trivial at first, but if you consider that an entire application may rely on that trivial bug being fixed in order to run the way it needs to, it's not so trivial after all, and deserves attention and careful thought.
There seem to be a lot of nods to Stoicism in Zen and the Art of Motorcycle Maintenance, and I'm looking forward to re-reading it after I learn a bit more about the Stoics. "The Obstacle Is the Way" is at the top of my list.
An Overview of Vote
The user stories for this Free Code Camp project are here.
Vote is a server-side rendered single page application that allows you to create and manage polls as an authenticated user, vote once on any poll whether you're authenticated or not, view poll results instantly after voting, and share a single poll with friends and strangers.
It uses React and Redux for the client side application, and Express and MongoDB for the back end API and database. For each initial request, ReactDOMServer renders the entire React application's markup into a string, which gets injected into index.html before being sent to the client. Authentication is handled with JSON web tokens.
This project is far from perfect. You can't search or sort polls, but this app satisfies all of the user stories for Free Code Camp, and that was the intent. Vote won't be found on "Show HN", but the hard lessons it drilled into me have been invaluable.
Let's dig in!
If you're going to build a good table, you'd better know what tools to use when, why, and how. Otherwise your food will slide into your lap.
Linting with Standard JS (Use with caution)
I was skeptical of Standard JS at first, but I gave it a chance at the recommendation of Brian Holt in his workshop and really enjoyed the simplicity.
That said, the common gotchas of omitting semi-colons are well known and Standard has lint rules to catch those mistakes, assuming that all of your code is always linted. Continuous integration tools like TravisCI can run a final lint check for you before allowing the code to be deployed to staging. If any lint errors are present, the build will fail and it won't get deployed. I think Standard is fine for personal projects and smaller companies, as long as linting is strictly enforced.
For safety, Semistandard (Standard plus semicolons) is available, and you'll still get all of the benefits of a simple, effective style guide that you don't need to spend time tweaking. This gives you the peace of mind of knowing the risk of ASI bugs is off the table.
Why Standard (or Semistandard)?
Eslint is a powerful tool, but I'd spent more time than I'd like to admit tweaking linting rules and wrangling .eslintrc files.
Standard JS takes all of that choice out of the equation and enforces a simple, reliable style guide that can't be changed. If you change it with custom lint rules, then you're not coding in Standard Style.
Using Standard Without React
Standard itself is very easy to install and use. The docs are straightforward.
Running Standard alone works, but the default lint error output is hard to read in the terminal.
Instead of using Standard directly, you can install a package called snazzy, which will give you nicely formatted results with colors.
If you have a new project, here's what you can do:
If you already have standard installed globally, install standard as a dev dependency to work with snazzy in your project without errors: yarn add -D standard snazzy. The smarties behind Yarn discourage using global dependencies in most cases, so keep them local whenever possible so your projects can stay portable between machines. NPM works just fine too if you're not using yarn.
Add a lint script to your package.json. If you're using npm, mute the annoying err! messages by adding exit 0 to your script: "lint": "snazzy; exit 0". Running the script will check every .js file (ignoring node_modules) and return your lint errors, or nothing if everything passes.
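As a sketch, the scripts section of package.json might look like this (the exit 0 trick keeps npm from printing its own error noise when lint fails):

```json
{
  "scripts": {
    "lint": "snazzy; exit 0"
  }
}
```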
Using Standard With React
Using Standard with React will require an eslint config file, but it's still very easy.
If you have eslint installed globally, also install eslint locally (add eslint to the above command) so that your config points to the local copy of eslint to avoid errors.
Create a file called .eslintrc in your root directory and add this:
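A minimal .eslintrc for Standard with React might look like this, assuming you've installed eslint-config-standard, eslint-config-standard-react, and their peer plugin dependencies:

```json
{
  "extends": ["standard", "standard-react"]
}
```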
Then, add an eslint command with your source code directory to your lint script. Unlike snazzy, you need to specify where eslint should look for your .js files. For example, if you pass eslint a directory called src, it will check everything inside src. You can add multiple directories separated by spaces.
What if you use extensions like .jsx? eslint has a flag for that:
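The --ext flag tells eslint which additional file extensions to check. For example, to lint both .js and .jsx files in a hypothetical src directory:

```shell
eslint --ext .js,.jsx src
```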
The CLI has many more options depending on your needs. The docs are very helpful.
Finally, run it:
You can also delegate linting to eslint-loader, so you can see lint errors every time a new bundle is compiled. More on that in a bit.
I've turned off linting in Sublime Text because I've found it more distracting than helpful. Plus, it creates latency while saving. If you don't like linting in a terminal, eslint plugins are available for the major text editors.
Webpack 2
Very recently, Webpack 2 was finally released and it's awesome!
They've made the migration process almost painless. Webpack 2 will validate your webpack.config.js for you, so you don't need webpack-validator anymore. If you mess something up, Webpack will give you a helpful error message outlining what doesn't match their API, and how you can fix it.
The documentation is very well thought out and organized so I encourage you to check it out if you've never used Webpack before, or haven't upgraded from version 1.
While webpack seems to perform a basic task (files in -> bundle(s) out), its configuration can get complex depending on what you need it to do. The documentation for version 2 is thankfully much more clear and simplified than version 1, so go through the concepts and guides on webpack.js.org if you don't know where to start. It will get you up and running in no time.
Optimizing Your Bundle(s)
As your project grows, so will your bundle. It will probably get huge. The development version of Vote had a bundle that was over 3 MB before optimizing. For production, it now has a vendor.js bundle for larger dependencies, and a bundle.js bundle for the app itself. Each bundle weighs in under 200 kB minified and gzipped. They're still heavy, but much better than before. The main bundle could be broken up even further with code splitting so the client would only request the pieces of the app it needs for the current view, instead of the entire app at once.
Before configuring the vendor.js bundle for Vote, I first had to figure out which dependencies were bulking up the bundle.js file the most.
Analyzing a Bundle with webpack-bundle-analyzer
One of the coolest plugins I've come across for Webpack is the webpack-bundle-analyzer. It gives you a highly-interactive visualization of your bundle(s) to show you the elephants in the room.
It's as easy as installing the package and including it with your plugins in your Webpack config.
Now, anytime you build a new bundle, BundleAnalyzerPlugin will spin up a server and open your browser to a visualization of your bundle(s) and dependencies scaled to size. The plugin can take a config object as a parameter, but the defaults should be just fine for most cases. The config options are in the docs. To "turn it off," you can simply comment out the plugin.
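Wiring it up is just a require and a plugins entry. Here's a sketch of the relevant part of a webpack.config.js (not Vote's exact config):

```javascript
const BundleAnalyzerPlugin = require('webpack-bundle-analyzer').BundleAnalyzerPlugin

module.exports = {
  // ...entry, output, loaders, etc.
  plugins: [
    new BundleAnalyzerPlugin() // defaults open the report in your browser
  ]
}
```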
For Vote, I used this tool to determine which vendor dependencies should be extracted into a vendor.js bundle.
A lot of quick wins can be gained by splitting up your bundle. To start, putting your 3rd party libraries in a vendor.js bundle will not only reduce the file size of the two, but you'll also be able to take advantage of caching 3rd party dependencies that likely won't change nearly as often as your application's code. Your users would only need to download your bulky dependencies once, and they'll be ready to go almost immediately on future visits. Boom! Just be sure to include a chunkhash with each bundle so that browsers won't serve old cached bundles after you deploy a new build.
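As a sketch (the dependency list is illustrative, not Vote's exact one), a vendor bundle with cache-busting hashes can be set up like this in webpack 2:

```javascript
const path = require('path')
const webpack = require('webpack')

module.exports = {
  entry: {
    bundle: './src/index.js', // your application code
    vendor: ['react', 'react-dom', 'redux'] // your heaviest 3rd party deps
  },
  output: {
    path: path.resolve(__dirname, 'public'),
    filename: '[name].[chunkhash].js' // a new hash per build busts old caches
  },
  plugins: [
    // pulls the vendor entry's modules out of bundle.js into vendor.js
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' })
  ]
}
```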
The implementations for splitting up bundles can get deep very quickly, so split them up based on how you expect your app to grow, and split more deeply as you need to.
Some great advice on how to shrink down certain dependencies, and deal with multiple copies of dependencies with different versions can be found in this great post.
After splitting out your 3rd party dependencies, you can also take advantage of code splitting for your application code as well so your users only need to download the necessary code for each "page" of your single page app, instead of the whole thing at once.
For simplicity, I decided to add just the largest libraries to Vote's vendor.js bundle. Those dependencies alone made the main bundle.js much smaller, without an overly-large vendor.js. If you have a lot more dependencies and modules in a large application, you can split up your bundles in many different ways, but in this case for Vote, I'm not going to spend the time code splitting further since this app is not getting any larger.
Optimizing Bundles for Production
Making Webpack 2 builds for production can be as easy as running webpack -p. The -p flag will include UglifyJsPlugin, set all of your loaders' minimize option to true, and set the Node environment variable to 'production' so that your code's production optimizations can take effect. This is important for React, as it has a lot of production optimizations.
A quick way to cleanly create a production config file for webpack is to use webpack-merge. This allows you to combine a separate config file with your main webpack.config.js file to limit code duplication and keep your config files clean and easy to extend. To keep things simple, I decided to simply add a webpack.prod.js file for production builds, and use my webpack.config.js file alone for development builds.
Then, instead of exporting your config objects directly, you'll need to export them as functions. Easy. The production config function will use webpack-merge to combine webpack.config.js's config properties (imported as commonConfig) with its own production-only additions.
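A sketch of what that can look like, assuming both config files export functions:

```javascript
// webpack.prod.js
const merge = require('webpack-merge')
const commonConfig = require('./webpack.config.js')

module.exports = function (env) {
  // start from the shared config, then layer on production-only bits
  return merge(commonConfig(env), {
    plugins: [
      // UglifyJsPlugin, CompressionPlugin, etc. go here
    ]
  })
}
```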
The first three plugins are mostly identical to what webpack includes for you by default when passed the -p flag, besides some extra options in UglifyJsPlugin here. But now I was able to add CompressionPlugin to gzip the .js bundles spit out by Webpack. We'll get into how gzipped files can be served from Express, but *Spoiler Alert* you can use express-static-gzip instead of express.static. express-static-gzip is a wrapper over express.static that lets you serve static gzip files from a directory.
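On the Express side, swapping it in is close to a one-liner. A sketch, assuming the bundles live in a public directory:

```javascript
const express = require('express')
const expressStaticGzip = require('express-static-gzip')

const app = express()
// if public/vendor.js.gz exists, it's served with Content-Encoding: gzip;
// otherwise the plain file is served as a fallback
app.use('/', expressStaticGzip('public'))
```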
There are likely better ways to gzip files automatically with nginx, but my nginx-fu is lacking on that front.
Now, you can add an npm script to build your production bundle:
There is a problem with this if you use Babel's es2015 preset, but it's easy to fix: set modules to false in your Babel config. The es2015 preset will convert your ES6 modules to CommonJS modules by default, so just turn that off and your ES6 modules will be respected.
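In a .babelrc, that looks something like this (the react preset is shown only as an example of a second preset):

```json
{
  "presets": [
    ["es2015", { "modules": false }],
    "react"
  ]
}
```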
Linting with Webpack
Remember Standard? You can use eslint-loader to lint your code before Webpack bundles everything. I did this based on Brian Holt's recommendation in Complete Intro to React.
Then, add a rule to your config. The important part is enforce: 'pre', which allows your code to be linted before webpack's build step.
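A sketch of the rule in a webpack 2 config:

```javascript
module.exports = {
  // ...
  module: {
    rules: [
      {
        enforce: 'pre', // run before other loaders transform the code
        test: /\.js$/,
        exclude: /node_modules/,
        loader: 'eslint-loader'
      }
      // ...babel-loader and friends follow
    ]
  }
}
```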
Keep in mind that eslint-loader will only see the application code that gets bundled, so your server code will still need to be linted separately. If you're using Standard and not Semistandard, this is doubly important to protect yourself from ASI-related errors. Again, TravisCI can add a final lint check to your build before deploying.
The Client
Vote is a React application that leans heavily on Redux to share state across many components.
The skeleton of the application is based on Brian Holt's Complete Intro To React workshops, though I did a heavy refactor of the Redux logic as the features got more complex. I ended up using a pattern similar to Ducks.
I've been a big fan of React because it's very stripped down, and funnels you into making good decisions most of the time. It encourages functional programming concepts and has great patterns for handling immutable state and data flow.
Is React good for everything?
Of course not. There is no such thing as a one-size-fits-all framework. React works awesomely well for a lot of situations and teams, for apps big and small. But if you have a large team of developers working independently on the same app, a more structured framework like Angular 2 or Ember may better serve the team's needs, depending on the company culture. React is very powerful while staying out of your way, but if a team of 10 or 15 developers are working on the same application independently, many different styles of solving the same common problems can make things harder to maintain and tie together, especially as an app grows. Even if React is a lot more fun, easier to work with, and faster to develop with, a framework with more structure and "magic" can potentially be more maintainable over the long term for big projects, even if it is a pain to wrestle with at times. Like a lot of things in computer science, "it depends." I've heard it said that when choosing a framework, it's good to pick the one with the least potential of being the worst possible choice down the line.
React.createClass vs ES6 classes
I recently used this style for the first time while refactoring my Free Code Camp Wikipedia search project, Spiffy Wikipedia, with create-react-app (it's deployed here). At first it felt annoying to have to bind each method with the constructor's contextual this, but if you think about it, this is a win from a performance standpoint since React doesn't need to use any "magic" to bind this for you like it does with React.createClass.
You can even use a react replacement like inferno, which strips away createClass logic entirely (plus a lot of other things). The stripped-down half-siblings of React are mostly compatible with React's APIs, and they're better optimized for mobile devices. The developers behind Inferno have good reasons for prioritizing mobile.
Another performance gain can be achieved by using stateless functional components as much as you can until you need to use state, methods, or React lifecycle methods.
For Vote, I used the React.createClass style for stateful components because that's what I'd been used to, and how Brian Holt taught Complete Intro To React.
Practically speaking, I don't really have a preference of one stateful component style over the other at this point in time, but I do like how I don't have to worry about forgetting a comma when using ES6 classes. Manually binding methods isn't a big deal. Looking at the broader community, ES6 classes are undeniably the direction that most developers have committed to. It's probably best not to fight the tide, and to ensure that your open source code is as readable as possible for the majority of React developers who are used to ES6 classes. Empathy is a virtue. That said, I doubt that Facebook would deprecate React.createClass in the foreseeable future since so many codebases rely on it.
The true value of React's PropTypes really shines while building larger apps.
Take this example from Vote's Create A Poll page component:
There is a lot going on here, but just take a look at all of the propTypes checking the props coming in from Redux.
PropTypes allow you to type check your props to ensure your components receive the types they're expecting. This can catch a lot of bugs early. Using isRequired will throw a warning if an essential prop doesn't show up.
Beyond type checking, the propTypes property is great for documenting all of the props that your component is expecting.
If you're using ES6 class syntax for React, you can include your propTypes as a static property. Otherwise, you can tack propTypes onto your Component directly after defining it. Note the casing difference between the PropTypes object and the propTypes property.
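Here's an illustrative sketch of both styles (not Vote's actual code), using the React 15-era React.PropTypes. The static property form relies on the class properties Babel transform:

```javascript
import React from 'react'

// ES6 class style: propTypes as a static class property
class PollTitle extends React.Component {
  static propTypes = {
    title: React.PropTypes.string.isRequired // warns if the prop is missing
  }
  render () {
    return <h2>{this.props.title}</h2>
  }
}

// Otherwise, tack propTypes onto the component after defining it:
// PollTitle.propTypes = { title: React.PropTypes.string.isRequired }
```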
The official React docs have a great guide about how to think about your component structure.
Digression: One handy use for arrow functions in React's synthetic DOM event attributes is when you want to pass an argument to an onClick event. The native onclick handler only accepts a function reference with no parameters; onclick will pass along the click event object to the function it's calling as that function's parameter. You can get around this easily in React. I was able to use an arrow function inside of the onClick attribute, with this.deleteOption(index) as the arrow function's implicit return value. index will be the index value from a map function encapsulating this link.
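Here's a plain-JavaScript sketch of that pattern (the names deleteOption and fireClick are illustrative): the handler is always invoked with the event object, so wrapping the real call in an arrow function is what lets you pass your own argument along.

```javascript
const deleted = []
const deleteOption = index => deleted.push(index)

// stand-in for React invoking your onClick handler with the event object
const fireClick = handler => handler({ type: 'click' })

// onClick={this.deleteOption(2)} would run immediately during render;
// onClick={() => this.deleteOption(2)} runs only when the click fires
fireClick(() => deleteOption(2))
```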
Decisions about how to break up your UI into components can get subjective, but it makes sense to use the Single Responsibility Principle as mentioned in the docs. In practice, this can get difficult as applications grow, but simple code doesn't mean easy code.
Spiffy Wikipedia Example
For a simple example, let's put Vote aside for a moment and look at my other recently-refactored React project, Spiffy Wikipedia. This project simply allows you to search Wikipedia for a topic using its API, and then see some aesthetically pleasing results.
App's responsibility as a container component is to handle search requests from SearchBar, and pass the results down to SearchResults after performing an AJAX call to Wikipedia's API.
As Dan Abramov puts it, container components are "concerned with how things work."
Notice that this component only displays other components. The main concern of App is with handling data and passing it down to its children.
Header is a very simple stateless functional component. It shows the header. Notice the Header.css import. It may seem trivial to make a separate component for such a simple piece of UI, but now, anytime I might want to change the header's look, I know exactly where to go. Everything about the header's presentation, from markup to styling, is encapsulated right here inside this component. It has one responsibility.
Time to Refactor
There is a code smell here. SearchBar is currently a presentational component with one responsibility, which is to present the search bar. It's doing that single responsibility, but with quite a bit of markup to put a form together, as well as a link to a random Wikipedia article.
Do you keep everything encapsulated here, or do you break this up with components for "SearchForm", "Button" inside "SearchForm", and "ChanceLink"? This is where things get subjective. If you're not careful, you could end up making a component for every synthetic DOM element. On the other hand, it's good for readability and maintainability to keep components as simple as possible.
One easy compromise would be to make a SearchForm component responsible for the form itself, and leave the chance link in SearchBar since it's only one simple link element:
Now that render function is much cleaner. I included the rest of the code here to illustrate how SearchBar's methods and state can be passed down as props.
As events get triggered in the SearchForm component, SearchForm itself doesn't alter data or handle events. It simply passes events up to SearchBar: either an input value change, or a submit event. If SearchBar gets an input change event from this.handleSearchInputChange, called from SearchForm, SearchBar updates its state and passes that new state down to SearchForm's searchText prop as the new data. SearchForm can then update the form with the new data it receives. When SearchForm's form is submitted, it calls SearchBar's this.handleSubmit method, which SearchForm received as a handleSubmit prop from SearchBar. SearchBar will handle the submit by passing the form data up to App using App's handleSearchSubmit method, which App passed down to SearchBar as a prop called onSearchSubmit. SearchBar doesn't care about the details of how searches are actually submitted. That's App's responsibility.
The important takeaway is that data always moves in one direction. This is the essence of one-way data-binding. If anything breaks, it's easy to trace where something went wrong because data only moves in a single direction when an event happens. Typing something into the form doesn't update the DOM form directly like you'd expect natively. Instead, your keystroke triggers a change event, which eventually culminates with React rendering the form's text from its own state. This happens so fast, it feels native. This behavior happens in what's called a controlled component. React is handling the form state, instead of allowing the native DOM to handle form state in the browser. The process of debugging errors in this sort of flow is a lot easier than in other paradigms where data can be passed in both directions. Two-way data binding makes it hard to determine the source of bugs if you don't know where the data is coming from when a bug occurs. In large, complex apps, this can be a nightmare. Thankfully, Angular has moved past that in later versions, and many other frameworks have embraced one-way data-binding.
React Dev Tools even allow you to watch state updates in real time!
Escalating to Redux
Now, what if I decided to add a dropdown component of look-ahead search results to SearchForm?
It would make sense to include the dropdown's UI as a separate component, but what if it needs to call App.js every 400 ms to load results?
Let's reference the updated component tree:
We don't need to flesh out the new components in code to realize we'd have a problem. To pull this off, we'd need to pass App's loadWikiData method down as a prop through SearchBar and SearchForm, and possibly even to LookAheadDropdown, depending on whether SearchForm gets refactored to include handleSubmit instead of SearchBar to further clarify each component's single responsibility. But that's not all! We'd need to pass new state from App down the component tree to at least SearchForm as well.
This is an example of the data tunneling problem. As the components get nested deeper, so do the layers of components that data and events need to pass through. This is when Redux's predictable complexity becomes more desirable than the unpredictable complexity of data tunneling which can grow over time.
In essence, Redux allows you to store and manage all of your React state outside of your React components.
It uses a variation of the Flux architecture from Facebook. Flux is a pattern, not an implementation. It describes a "store", or multiple "stores", which are objects that hold on to your immutable state; "actions", which trigger state changes; and "dispatchers", which trigger actions based on UI events or other events. When a store is updated, the view layer (usually React) receives the new state as props, and updates its UI with the changes. Flux has a lot of other pieces to it which I won't dig into here, since you'll likely be using a Flux implementation library like Redux. Flux is complicated because it solves a much more complicated problem. Facebook's UI code is very complex, with many hundreds (thousands?) of deeply nested components, many needing to share props and state. Just imagine building Facebook UIs with React alone.
There are many libraries implementing the Flux pattern, but Redux has been the most widely used lately, and for good reason. Instead of allowing multiple stores, Redux uses a single store for all of your state. Any time an action is dispatched, a reducer function takes in the action plus the current state, and returns a new state object with the changes, not a mutated form of the state object which was passed in as a parameter. This allows for powerful features like time-travel debugging in Redux Developer Tools, which does exactly what you'd expect. Since reducers create an entirely new state object each time, it's possible to track those unique objects over time, and pass your application back and forth between states with Redux Dev Tools. This single direction of data flow into the single store object makes debugging very easy, since it's easy to trace which reducers update state in order over time, and to see how each update affects the state object.
For big apps, Redux allows you to decouple your components so they don't need to worry about the flow of state, state-altering methods, or events being passed through the component tree. It all flows in and out of Redux, a la carte style.
When Do You Need Redux?
Not all apps need Redux, and you shouldn't use it if you're not running into the problems it's meant to solve. There are trade-offs.
On one hand, Redux allows you share state between components easily, and keep components decoupled and portable.
On the other hand, Redux adds complexity and weight to your app, making it more tedious to add features and maintain. It also increases your app's surface area for bugs.
At what point do you need Redux?
Many people don't mind data-tunneling for small to mid-size apps, but it's probably a good idea to bring in Redux once you find yourself with state and props that need to be passed at least two layers deep in multiple parts of your application. As your app grows, those complexities will start to snowball, and refactoring will become more and more difficult.
If, on the other hand, you have an app with one component a few layers deep that needs to tunnel some state, then you'll probably be fine just handling the data-tunneling in that instance. The added complexity of Redux would outweigh the complexity of maintaining one data-tunneling occurrence without Redux.
For Vote, adding Redux was a no-brainer. Many components need access to the user object in state, which alone would have been a nightmare to pass between components. A higher-order component could have been an option, but there are a number of other bits of state and methods, like flashMessage's state-altering methods, which are used in multiple components in different branches of the component tree. Using higher-order components for everything would get hairy fast.
The added complexity of Redux didn't hold a candle to the added complexity without it.
A Ducks Pattern Variant
At first, my Redux code lived in a single store.js file. It didn't take long for the file to get hairy.
After some research, I read about the Ducks Pattern, which looked very promising.
I ended up with a variation of that pattern which worked out well:
The store.js file initializes the store with middleware and the rootReducer. Thanks again to Brian Holt for teaching that handy devToolsExtension one-liner. Before that, I used remote-dev-tools to spin up a separate server to run the Redux Dev Tools to prevent server-side errors.
rootReducer.js combines all of the reducer slices from modules into one root reducer.
One important note is that the name of the exported reducer slice inside each module determines the name of the module's individual state object in the store.
Let's look at the newPoll module. There is a lot going on in this file, but it progresses logically. For newPoll, the actions, action creators, reducer functions, and the rootReducerSlice are all right here, not in separate files. Adding a new feature is as easy as adding the relevant code to each section, instead of bouncing back and forth between files. It's easier to reason about as well (for me at least).
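As a sketch (illustrative names, not Vote's actual code), a Ducks-style module keeps the action type, action creator, and reducer slice side by side in one file:

```javascript
// action type, namespaced to avoid collisions with other modules
const UPDATE_TITLE = 'newPoll/UPDATE_TITLE'

// action creator
const updateTitle = title => ({ type: UPDATE_TITLE, title })

// reducer slice: returns a new state object rather than mutating the old one
const initialState = { title: '' }
const newPoll = (state = initialState, action) => {
  switch (action.type) {
    case UPDATE_TITLE:
      return Object.assign({}, state, { title: action.title })
    default:
      return state
  }
}
```

The exported reducer (newPoll here) is what the rootReducer combines, and its name becomes the name of this module's slice of state in the store.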
One way to make this cleaner would be to group the Action Types, Actions, and Reducers together for each feature, and maybe even put each feature into separate files to be imported into this current one for the Root Reducer Slice to use.
Here's how grouping just the action creators and reducers together can look:
The file is still pretty long, but this refactor is already much easier to reason about. Keeping each action creator and its corresponding reducer together eliminates the need to scroll back and forth while changing things, or while adding a new action/reducer flow.
Out of the box, Redux only supports a strictly synchronous data flow.
Thankfully, middleware makes it easy to dispatch functions and promises, instead of synchronous action creators alone. This allows you to perform operations after an action is dispatched, but before the reducer is called.
redux-thunk, for example, will hijack a dispatched action when a function is returned and tell Redux "Hey, I've got this. Do other things, and I'll get back to you." Once an async operation, commonly a promise, gets resolved or rejected, you can dispatch any actions you'd like depending on the result of the promise, and Redux will handle those resulting action dispatches. The final actions must return normal action objects with at least a type property.
You can use a thunk for dispatching actions conditionally as well. A thunk is simply a wrapper function used to delay the evaluation of the code inside it.
You can dispatch more than one action in a `.then` or `.catch` callback after a promise is resolved or rejected, but the final actions dispatched must be normal action objects, each with at least a `type` property. The module's root reducer will use the `action.type` property of the dispatched action in its switch statement to determine which reducer to run (if any).
Handling async actions in Redux is simple using thunks. Consider the `axios` call from the example above: after getting a response from the API, one set of action creators will get dispatched if the response is successful and the promise resolves, and a different set will get dispatched if the response is an error and the promise rejects.
Writing a thunk action involves simply returning a function that takes `dispatch` as its parameter, and making your async request (a side effect) within that function.
Action creators in Redux are meant to be pure functions by default, but thunks allow you to make them "impure" and handle side effects effectively.
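Here's a minimal sketch of the pattern, with a stand-in `fakeApi` in place of a real axios call (all names are illustrative):

```javascript
// Stand-in for an axios call; returns a promise like axios does
const fakeApi = { savePoll: (poll) => Promise.resolve({ data: poll }) }

// Plain action creators: the "final" actions are normal objects with a type
const savePollSuccess = (poll) => ({ type: 'SAVE_POLL_SUCCESS', poll })
const savePollError = (err) => ({ type: 'SAVE_POLL_ERROR', error: String(err) })

// The thunk: returns a function that receives dispatch, performs the side
// effect, then dispatches plain action objects once the promise settles
function savePoll (poll) {
  return function (dispatch) {
    return fakeApi.savePoll(poll)
      .then((res) => dispatch(savePollSuccess(res.data)))
      .catch((err) => dispatch(savePollError(err)))
  }
}
```

With `redux-thunk` applied as middleware, `store.dispatch(savePoll(poll))` would hand the returned function the store's real `dispatch`.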
[Hamsters.js](https://github.com/austinksmith/Hamsters.js) for that awesomeness. It's best not to go that route until you absolutely need to.
A lot of wins are gained with unit tests. They force you to think about common edge cases in your business logic and how to handle them, they act as documentation for how your logic works in detail, and they encourage simple, decoupled code. If your code is hard to test, that's a code smell. As you develop your app, you're probably going to break things you've already worked on as you add more features. Well-thought-out unit tests will let you know right away when something breaks, so you don't end up with a nasty surprise later on. Just having a test runner touch a good majority of your code frequently is a good sanity check, and it builds confidence in your codebase as it grows, even if a lot of people are working on it at the same time.
Testing React with Jest and Enzyme
Jest is well suited for React. You can write snapshot tests of your components, which render components into markup as JSON using whatever props you'd like. The test fails if the markup changes, and you'll see a diff of what changed, similar to a Git diff. You can update the snapshot by running `jest --updateSnapshot` or just `jest -u`. Snapshots are far from bulletproof, but they're cheap and disposable, and will ensure that your markup is rendering as expected.
For simplicity, let's go back to [SpiffyWikipedia](https://github.com/itxchy/FCC-spiffy-wikipedia/blob/master/src/components/SearchResults/SearchResults.test.js) for a snapshot example:
We'll get to `enzyme` in a moment. `shallow` renders the component, `shallowToJson` creates a JSON tree from `shallow`'s render, and you just need to `expect` the tree to match the snapshot. `expect` comes with Jest for free.
If no snapshot file exists, Jest will create one for you. A directory called `__snapshots__` will appear in the same directory as your test file.
Beyond snapshots, Enzyme from Airbnb is essential for testing your components more deeply. It works great with Jest, and it's also commonly used with Mocha.
Shallow rendering only produces a component's surface-level markup, and none of the child components it may have. Shallow doesn't render a full DOM, so you won't have access to DOM APIs or the component's lifecycle methods. The trade-off is that shallow rendering is fast. It's great for testing whether a component's markup is showing up properly, with more detail than snapshots. `shallow` should be used as much as possible before reaching for `mount`.
Here are a few `shallow` tests from SpiffyWikipedia. These tests expect the correct markup and components to appear depending on what prop values are passed into `<SearchResults />`. Shallow rendering has a lot of methods you can use to target markup inside your component, and the fine folks at Airbnb offer some great documentation.
`SearchBar` was a bit more complicated to test. Some DOM events needed to be simulated, so `mount` was used instead of `shallow`. `mount` renders a full DOM, so you're able to interact with DOM APIs and component lifecycle methods.
In the tests above, form change events and button click events are simulated to test how the component behaves.
Testing Redux with Jest
Testing Redux is as simple as calling a reducer with a default state object and an action containing dummy data, then verifying the returned new state object against what you're expecting. Let's look at a few examples from Vote.
Here are a few action creators alongside their reducers in createNewPoll.js:
And here are their tests in createNewPoll.spec.js:
In each test, `createNewPoll.js`'s root reducer slice returns a new state object based on the previous or default state and an action with test data. If the new state object returned from the reducer is what you expect, the test passes. If not, you know exactly where to look for the bug.
A root reducer slice is simply the module's exported reducer that gets passed to `combineReducers` in the rootReducer.js file. It's a bit confusing to think about at first. Put another way, the root reducer is Redux's main reducer, which reduces everything into one state object. The root reducer is composed of a number of root reducer slices, each being an individual module's main reducer. A module's main reducer (or root reducer slice) will call one of the many individual reducers in that module if an action's type gets matched. That reducer will return a new state, which gets passed back through `combineReducers` into Redux's main root reducer, and Redux's store will have a brand new state object.
A lot more edge cases need to be considered for these tests, like wrong types, out-of-range values, super-long strings, etc. being passed as parameters to action creators, but this is a good start. In production apps with real clients, thousands of users will use and abuse your forms every day. Applications need to be hardened against bad data. This is important for security too, but more on that later.
webpack makes it easy to modularize your CSS and import individual CSS modules into your React components.
create-react-app encourages this by default, and I really enjoyed this workflow in my Spiffy Wikipedia app. Maintaining CSS styles is getting easier all the time.
PostCSS offers many powerful plugins as well, which will work with your favorite CSS preprocessor.
For Vote, I'll admit I didn't focus as much attention on styling as I should have. For simplicity, I elected to keep my Sass and CSS considerations separate from my React code. I used `ExtractTextPlugin` to bundle all of the compiled CSS into a separate CSS file, and imported `main.scss` directly into `BrowserEntry.js` so webpack would know about it.
All of my SCSS files were stored in a separate `sass` directory, the idea being to keep styling concerns and React component concerns separate. But isn't the styling of a component part of that component's concerns? It depends on who you ask, I guess, but importing your Sass or CSS modules directly into the components that need them makes a lot of sense. Instead of digging through a separate directory tree looking for the style module responsible for your component, and managing the file structure of a separate style directory, it makes sense to keep your CSS or Sass together with the components they style. This makes for a faster workflow, and it makes it easier to use CSS conventions like BEM. Instead of mirroring your CSS directory with your component directory, you can keep your components even more self-contained in their file structure.
How to bundle your CSS is another consideration.
Vote uses Express to handle page requests, as well as API requests for CRUD operations with MongoDB.
Server Side Rendering
Vote's React markup is rendered on the server into a string using `ReactDOMServer.renderToString`.
The code itself is very simple. Thanks again to Brian Holt for teaching this pattern in the first Complete Intro To React!
Note that this code works for React Router 2. The documentation for server-side rendering doesn't mention any changes to this pattern from v2 to v3, but you've been warned.
React Router 4 is looking amazing, and you should probably just make the jump if you're starting a new project. The final version will be released very soon, and the documentation got a complete revamp. You can use `match` for server-side rendering if you're using version 4. Brian Holt explains how to tie them together in his second Complete Intro to React course on Front End Masters. It's worth it! React Router has had a lot of API churn over the past few years, but if the Twitter whispers are reliable, version 4 will be the last major API redesign for the foreseeable future. I'm glad Ryan Florence and Michael Jackson elected to take the heat from disgruntled users while working tirelessly to build a more finalized router API that will be much better for the long term. The breaking changes were well worth it to get to this point.
The code above seems pretty busy at first glance, but let's break this down.
When a request is received by Express, the `match` function from `react-router` is called. It takes two parameters. The first is an object where it can learn about your application's `routes` (returned from a function in this case) and the `location` (the request's URL) to match against those routes; it can also take a `history` property if you'd like.
One aside: in your `index.html` file, make sure you include a forward slash in your bundle script locations, or you might get basename errors when reloading nested routes or navigating to nested routes directly:
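For example (the bundle path here is an assumption, not Vote's actual path):

```html
<!-- Leading slash: always resolves from the site root -->
<script src="/public/bundle.js"></script>

<!-- Without the slash, "public/bundle.js" resolves relative to the
     current URL, which breaks on nested routes like /polls/123 -->
```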
`match`'s second argument is a callback with three parameters: `error`, `redirectLocation`, and `renderProps`. The meat of that callback simply handles an `error` if it is defined, then a `redirectLocation` if it is defined, then `renderProps` if defined (this is how you know there is a successful match between the requested `location` and a known route), and finally falls back on a 404 error if none of the callback's parameters are defined. If there is no error, no renderProps, and no redirect, then there is no match.
The actual rendering happens thanks to `ReactDOMServer.renderToString`. Since this is happening on the server, using JSX would add unnecessary complexity, since JSX needs to be transpiled. Instead, using `React.createElement` for the two elements we need works out fine without too much nesting.
JSX is just syntax that makes it easier to compose components. Components are just functions, so we can easily write them without JSX.
`React.createElement` takes three parameters: the element type, a props object, and the element's children.
In JSX, this would translate to:
Those two elements will render into the entire page's markup.
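To make that shape concrete, here's a toy stand-in that only mirrors `React.createElement`'s `(type, props, ...children)` signature (this is NOT React's implementation):

```javascript
// Toy stand-in: returns a plain object describing the element
function createElement (type, props, ...children) {
  return { type, props: props || {}, children }
}

// Roughly the server-side call shape: a container div wrapping the app
const page = createElement('div', { id: 'app' },
  createElement('h1', null, 'Vote'))

// In JSX, the same tree would read: <div id="app"><h1>Vote</h1></div>
```

The nesting stays manageable because we only need a couple of elements on the server.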
Pre-Transpiling For Node Using Babel
One obstacle to rendering a React application on the server is that Node doesn't understand JSX or ES6 module syntax, not to mention any other ES2016-ES2017+ syntax we might have leaned on Babel to transpile in the application code.
There are a few options to transpile the application code for the server.
Brian Holt teaches with `babel-register`, which can be required into your server code.
I ended up adding an option to ignore `node_modules`, because I think there was a `.babelrc` file somewhere in a third-party module that was overriding my own.
This works great for development, but you don't want to use this in production.
For server-side code, it's better to transpile ahead of time, instead of having Babel transpile the same code over and over again on the server.
To pull this off, I made a `production` directory and set up some npm scripts to transpile a copy of all of the application code that the server needs to render, then added that process to the production build step.
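The scripts might look something like this (names and paths are assumptions, not Vote's actual package.json):

```json
{
  "scripts": {
    "clean:prod": "rm -rf production",
    "transpile:prod": "babel src --out-dir production",
    "copy:sass": "cp -R sass production/sass",
    "build:prod": "npm run clean:prod && npm run transpile:prod && npm run copy:sass"
  }
}
```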
Now, when running `build:prod`, all of my application files will be transpiled, and the `sass` folder will be copied to the production directory too, so that the application doesn't panic (gross, I know). I'm now a convert to keeping styles with their components in the same directory to begin with in React apps.
That's great, but there are a few more steps.
It would be a drag to have to transpile everything every time a file changes during development, so it would be nice to still be able to use `babel-register` outside of a production environment.
Finally, the correct application files need to be required based on the environment:
Now it's closer to production ready.
When the app is in a production environment, `babel-register` won't be run, and React will render the pre-transpiled version of the app. During development, `babel-register` will transpile on the fly, and ReactDOMServer can point to our actual application code without a lengthy transpilation step beforehand.
Express API Routes
Vote's API was meant to be RESTful.
RESTful Dialog Between Machines
REST is more of a paradigm than a standard. It describes an interface between a client and a server that is uniform, stateless, and explicit. Resources are passed between computers (usually as JSON, but XML is still prevalent) in a predictable way, so that two distinct applications don't have to care about what languages, libraries, or even hardware each is using. REST is a method of communication with its own customs and dialect.
REST stands for REpresentational State Transfer.
Making a polling application was a great exercise for creating a server API, since a number of simple Create, Read, Update, and Delete operations would be necessary.
This application uses Express Router, so picture these endpoints as prepended with `/api`. Anytime a request reaches the server, it first passes through Express's middleware, which includes Express Router. Here's a heavily redacted `polls.js`:
One mistake I made while building this API was that I didn't design a blueprint first.
It's a good idea to have all of your endpoints and data schemas mapped out beforehand so you don't end up having to change it often as you build out features.
In the example above from `polls.js`, those endpoints could be crudely mapped out this way:
As I developed this application, it's easy to see that I built the first two endpoints before the last two. It makes sense to think of endpoints as if you're accessing a directory structure. In this case, the username endpoint isn't very descriptive; `/api/polls/username/:username` would be clearer.
The DELETE endpoint could have just pointed to `/api/polls/id/:id` as well, but what if someone (possibly me) accidentally sent a DELETE request to that URI instead of a GET request, and it somehow made it to production? Someone could lose priceless data from a groundbreaking poll.
API design is a deep subject, and there are a lot of different opinions about it, but REST offers some great guidelines to follow.
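To illustrate the blueprinting idea, even a flat map like this, written before any handlers exist, goes a long way (a hypothetical example, not Vote's exact API):

```javascript
// Hypothetical REST blueprint for a polling API, mapped out up front
const blueprint = [
  { method: 'GET',    path: '/api/polls',                    action: 'list all polls' },
  { method: 'POST',   path: '/api/polls',                    action: 'create a new poll' },
  { method: 'GET',    path: '/api/polls/username/:username', action: "list one user's polls" },
  { method: 'PUT',    path: '/api/polls/id/:id',             action: 'update a poll' },
  { method: 'DELETE', path: '/api/polls/delete/:id',         action: 'delete a poll by id' }
]
```

Keeping the destructive DELETE route on a distinct path, as discussed above, makes an accidental method mix-up harmless.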
Avoiding Callback Hell With Async/Await
Promises work very well by themselves, but I've really enjoyed working with async/await syntax. It allows you to write asynchronous functions while treating them as if they're synchronous. Generators and promises are very effective, but I enjoy the simplicity and readability of async/await. It's really cool that so many great alternatives to callbacks have entered the wild in just a few short years!
Not So Fast...
Unfortunately, Node 6 doesn't support async/await, but Node 7 does, and Node 8 is scheduled for release in April 2017! By the time you read this, async/await will probably be available natively in Node.
For now, I used a library called `asyncawait` to be able to use async/await as functions, so no transpiling would be necessary.
Keep in mind that `async` and `await` are reserved keywords in the spec, and NOT functions like they're shown here. This 2ality article goes deep into async/await, in addition to other asynchronous syntaxes coming soon. The `asyncawait` library behaves just like the real thing, so it's fine until I can use async/await natively.
Let's look at an older commit, when I was still using `babel-register` on the server for all environments:
Could a promise have been used here? Of course, but this project seemed like a good time to give async/await a try, and it's very simple to use.
Inside of an async function, you set up `try` and `catch` blocks. Inside the `try` block, you can use `await` to essentially tell the function, "Hey, I'm pausing this context until I hear back from the promise I'm waiting for." In this code, the expressions following the `await` call won't evaluate until the promise from `updatePollDocumentOnEdit` gets resolved or rejected. Once the `updatePollDocumentOnEdit` promise resolves, `updatedPoll` will receive its new value, and the rest of the function will continue. All of this happens while Node does other things outside of the `async` function's context. If a promise gets rejected, you can handle the error in the `catch` block.
This was a simple example with only one `await` statement, but what if you had to perform three or four async function calls? Instead of nested callbacks, or even nested promises, you can use `await` to tell the `async` function to pause until each promise resolves. This way you can have multiple variables assigned from `await` calls that run in order, each pausing until its promise gets resolved:
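In native syntax (which the `asyncawait` library mimics with functions), the shape looks like this; the helper functions are illustrative:

```javascript
// Illustrative async helpers; each returns a promise
const findPoll = (id) => Promise.resolve({ id, title: 'Best Editor?' })
const addVote = (poll, choice) => Promise.resolve(Object.assign({}, poll, { latestVote: choice }))
const savePoll = (poll) => Promise.resolve(Object.assign({}, poll, { saved: true }))

// Each await pauses this function (not the Node process) until its promise
// settles, so the three calls run strictly in order
async function voteOnPoll (id, choice) {
  const poll = await findPoll(id)             // runs first
  const updated = await addVote(poll, choice) // waits for findPoll
  return savePoll(updated)                    // waits for addVote
}
```

Any rejection along the way can be handled by one `try`/`catch` wrapping the whole sequence.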
Many more use cases can be read about here.
The Database: MongoDB
I tried out PostgreSQL with Vote at first, but switched back to MongoDB, mostly because I'm more familiar with Mongo and Mongoose, but also because this app's data works very well as NoSQL documents. Translating the poll schema into SQL tables would have gotten messy very quickly, and I'm not great at SQL scripting. I'm sure there's an elegant way to do it that I'm not aware of, and PostgreSQL isn't going anywhere. The other key factor was being able to use a hosted MongoDB database on mLab for free.
Going hand-in-hand with designing an API, another key takeaway from Vote was thinking about schema design.
The schema for polls went through a few iterations, but I ended up with this:
This could probably be simplified, but having the separate schemas made the structure flat and clean to look at. Each level would be easy to extend as well.
Mongoose allows you to chain commands and use promises to simplify MongoDB logic.
This code searches the database for an email or username matching the identifier passed into the login form. The promise gets resolved with a user object (either populated with a found user, or empty); otherwise, the error is passed to the `catch` callback. It's important to handle rejected promises quickly so that you can debug them in the right context. Without the `catch` statement, an error here would be more difficult to trace.
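Here's a plain-promise sketch of the idea, with a stand-in `findUser` in place of a real Mongoose query (names are illustrative):

```javascript
// Stand-in for a Mongoose query that fails, e.g. a lost connection
function findUser (identifier) {
  return Promise.reject(new Error('connection lost'))
}

// A .catch right at the call site surfaces the failure with useful context
const login = findUser('mel@example.com')
  .then((user) => ({ ok: true, user }))
  .catch((err) => ({ ok: false, reason: err.message }))
```

Without that `.catch`, the rejection would bubble up as an unhandled rejection far from the query that caused it.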
Development Database vs Production Database
During development, you can spin up a database on your system if MongoDB is installed locally, and connect to it with mongoose.
You'll need to create a `data` folder in your project's root directory, and another npm script:
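The script can be as simple as pointing `mongod` at that folder (the script name is an assumption):

```json
{
  "scripts": {
    "mongo": "mongod --dbpath ./data"
  }
}
```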
You should see a bunch of logs in your terminal if it's working.
Then, you just connect Mongoose to the local database's URI.
For production, you should use an environment variable for your mLab URI, since it's a bad idea to put security credentials directly into your code, especially if you're committing to GitHub publicly.
You can use a conditional statement to connect to the proper database based on your Node environment:
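A sketch of that conditional (the local URI and environment variable name are assumptions, not Vote's actual config):

```javascript
// Pick the connection string based on the Node environment.
// In production, the real mLab URI lives in an environment variable,
// never in the committed code.
function getMongoUri (env) {
  if (env === 'production') {
    return process.env.MONGODB_URI
  }
  return 'mongodb://localhost:27017/vote'
}

// mongoose.connect(getMongoUri(process.env.NODE_ENV))
```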
Promises make life easier.
It's never been easier to deploy to the cloud. You can spin up a VPS instance in a few minutes at Digital Ocean, Linode, and others, or use a platform like Heroku or Amazon EC2 to handle all of the devops for you.
I think it's valuable to go through the process of configuring your own VPS at least once, as it's a great learning experience that gets you thinking about Linux, permissions, security, configuring a server that's not Node, caching, and even some rudimentary performance optimizations to keep in mind if you build an app that ends up getting real users in the future. Nginx is very powerful, and can stand in front of your Node application to serve cached static content much more efficiently.
If you're serving customers, that's when it's better to use a cloud platform like AWS that has a team of seasoned professionals working full time on maintaining the servers as well as security concerns. Plus, it will make scaling out your app quickly a lot easier when your Hat Review app makes it to the front page of Reddit, crushing your server with requests. You'll pay a premium for the extra pre-configured features, but situations like spikes in traffic (the good kind), DDoS attacks (the bad kind), and even intrusive hacking attempts are commonplace on the modern web. If you're serving customers, you have a responsibility to ensure consistent uptime and protect customer data. Security needs to be a top concern at the development stage, but more on that later.
Cloud platforms like AWS aren't invincible, as a lot of people found out recently, but it's important to remember that no matter how reliable a cloud service seems, they are all still just computers being maintained by humans. Mistakes will happen, guaranteed. There will be plenty more outages at all sorts of companies in the future, whether from mistakes in a terminal command, a natural disaster, or just too many redundancies failing at the same time. Anticipating disasters, and even simulating them, will help you come up with an established playbook for what to do when the probable eventually happens. Performing "fire drills" regularly is a good practice for companies, because when disasters do happen, you'll be following a familiar plan step by step, not reacting to events in a heightened emotional state. With all of that preparation, however, things will still happen that are beyond your control. When S3 went down, it was probably cheaper for a lot of companies to just accept a few hours of downtime and lost revenue rather than reach for their "solar storm" contingency, which could have taken many more hours to conduct properly than the outage itself. It probably helped that a large chunk of the Internet was in the same boat.
Continuous integration services will build your project in the cloud, and run all of your tests and linting to determine whether your app's build is "passing" or "failing". This ensures that your app can actually install in a variety of environments, and will help you catch missing dependencies, for example if you used a module that was installed globally on your system. It also keeps you from automatically deploying a build with failing tests. With teams of developers, this adds an automated safety check beyond running unit test suites manually.
For Vote, I used Circle CI since it's very easy to use, but I'd recommend Travis CI since it offers a lot more features for free, and their mission is to forever support open source software. Travis CI is heavily integrated with GitHub, and allows maintainers to manage builds from pull requests easily. The docs are very good too.
If you're using a cloud platform like Heroku, you can deploy to a staging server on every successful build. Travis CI has a guide for that.
Staging vs Production
It's a good idea to have your automatic deployments first go to a staging server, and then deploy to production manually. The manual production deployment step can be as easy as a single click. This ensures that if a serious mistake passes Travis CI's build step and testing, you can still catch it before passing it on to the production server.
A feature like Heroku pipelines is useful for managing multiple deployments that share the same codebase, and you can promote code from a staging server to a production server easily and safely, knowing that it's already functioning in an actual server environment.
For Vote, I use a free Heroku instance for continuous deployment, and I deploy manually to my own production VPS server after that.
If you have an app that gets steady traffic every day, how do you take your app's temperature? It seems to be running fine, your unit tests are all passing, and the app hasn't crashed since your last deployment. That's all great, but how do you know that your users aren't hitting some strange edge cases that are causing errors?
Well, you could look through your application's logs from `forever`'s log files, but they can get very long. More importantly, how do you view only the logged errors (and the logged actions leading up to them) that you're most interested in for debugging, separate from other errors? What about fatal errors that you want paged (or emailed) to you as soon as they happen?
A logger called Bunyan allows you to configure JSON logs that can include as many details as you'd like. Those JSON logs can be pretty-streamed to the console and to an external log file, which an external process like `node-log-watcher` can monitor for you, sending you an email anytime something blows up. You can also stream Bunyan logs programmatically, so you could make yourself an IoT wifi siren that goes off anytime Bunyan streams a POST request with a fatal error. Or not, but getting emails about the errors you want to know about is very helpful.
What to log can turn into a rabbit hole, but it's helpful to log information about the successful operations that affect users, and to log detailed errors when things go wrong. Hopefully, you'll be able to see what operations happened leading up to an error, narrowing down what went wrong without having to inject a bunch of `console.log` statements.
To set up bunyan:
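A configuration along these lines is a sketch, not Vote's exact setup (the logger name and file path are assumptions):

```javascript
// const bunyan = require('bunyan')

// Streams: "info" and up to stdout, "warn" and up to a log file
const config = {
  name: 'vote',
  streams: [
    { level: 'info', stream: process.stdout },
    { level: 'warn', path: './log/vote.log' }
  ]
}

// const log = bunyan.createLogger(config)
```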
With this config, levels that are at least "info" will get streamed to `process.stdout`, and levels that are at least "warn" will get logged to a file called `vote.log` in a `log` directory at the project's root. This file can be pretty-printed by running `bunyan vote.log` inside the `log` directory.
Here's how some helpful logging can work in practice:
Note the use of `log.info` and `log.error`. Both logs will be printed to the console, but the error will also get written to the log file, which can be searched later.
There are a lot of reasons not to build your own authentication strategy. Good security is a constantly moving target, and nothing is immune from hackers who want to break in badly enough. Even if you manage to build a reasonably secure authentication strategy that's penetration tested and follows current best practices, how often are you going to audit it for new vulnerabilities, known and unknown? Once a week? Once a month? Never? Before you know it, your users' JWTs start getting stolen.
Security is very hard to get right, and it's best left to professionals who deal with it full time.
Passport has over 300 authentication strategies that are widely used. They're free, open source, and easier to implement than building a leaky authentication strategy yourself.
For Express apps, `helmet` is mandatory.
Finally, use HTTPS if you aren't already. It's very easy to set up with certbot.
Getting a comfortable understanding of prototypal inheritance, lexical scope, closure, `this`, CommonJS/ES6 modules, higher-order functions, and async functions makes everything else easier. You can pick up new frameworks faster because you start to recognize the design patterns they're based on, and pick up on the conventions that many libraries follow. Think about all the libraries that use chained function calls in their APIs, similar to jQuery, or the open source projects that use constructor functions to build their library APIs. You begin to see the same coding patterns used over and over again, and once you understand them, you can start working on other people's code a lot more effectively.
All it takes to get comfortable is to just put in the time. A lot of it. Talent (if that even exists) may give some people a head start for a time, but the people who are willing to put in the work consistently and stay curious reach new peaks every day. That can be anyone. If you want to be a "good developer", it starts with deciding to be a good developer, defining what that means, and doing what it takes to get there. The idea of what a "good developer" is will change over time. It boils down to building things with quality alongside people who wish to do the same, and not trying to learn everything. There are more new technologies coming out every month than you could ever hope to learn. It's important to be okay with the unknowns, and hopefully find some exhilaration in the potential to discover new things every day. You can learn a ton from other developers, and you likely already have a lot to teach.
My favorite learning resource has been the You Don't Know JS series. Kyle Simpson goes very deep into the guts of the language in a very clear, concise way. I've jumped around the books and had to re-read a lot of the chapters over the past couple of years, but the concepts are crystallizing more every time I revisit them.
And again, I have to recommend Zen and the Art of Motorcycle Maintenance one more time. It will change your life.