sudoscientist:~# apt install golang nodejs
Building sudoscientist
With the languages for my project picked out, I began working on a simple RESTful backend in golang. Here, there was a choice of which router I would use, and the decision came down to either gorilla/mux or go-chi/chi. While I could have just used net/http, I wanted something simple to organize my routes without having to start entirely from scratch. A routing framework was something I had no intention of writing myself, nor did I want to go down the path of writing a simple routing utility only to replace it later. The ability to plug in middleware to handle authentication and other common roadblocks also made using a routing framework attractive. The decision to use go-chi/chi came down mostly to how easy it was to read the source code and grasp what was happening. The simplicity of the tools let me understand what I was doing and hack on whatever else I needed along the way. With this, and the decision to use React/Redux already in place, I started building.
Golang, and what makes a website
The proverb measure twice, cut once is a useful philosophy when building websites, or applications in general. Knowing ahead of time what needs to get done makes it much easier to actually build it. This project was made easier by the iterations I went through when using a traditional model, view, and controller framework. I knew that I needed separate data stores for my various components, such as users, profiles, and posts. I also knew I had to implement some type of authentication and authorization, possibly using JSON Web Tokens (JWTs) to pass the data between the clients and servers. I knew that if I could implement this process using cURL, it would be fairly easy to transition to a proper JavaScript frontend. To store data I would use PostgreSQL, since that was the database I was most comfortable with, and it has plugins for some of the future projects I want to work on.
The first step was implementing a simple database connection. After reading some best practices for having multiple modules connect to the same database, I opted for dependency injection to pass the database handle around to my various sub-components.
func main() {
    // initiate the single, shared database connection
    db, err := database.NewDB()
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()
    // inject the connection into each sub-component and let it set up its tables
    auth.DB = db
    auth.Init()
    users.DB = db
    users.Init()
    blog.DB = db
    blog.Init()
}
This allowed me to have one database connection, and to pass a reference to that connection to my sub-components. It also meant that giving a new component database access was a matter of adding two lines to main.go. Additionally, until I set up migrations, the Init() calls would let me initialize my tables.
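The NewDB helper itself isn't shown here. As a rough sketch, assuming the standard database/sql package with the lib/pq driver (the actual repo may use a different driver or connection-string handling), it looks something like this, with each sub-package then exposing something like var DB *sql.DB for main to assign:
// Rough sketch of the database package; the driver, environment-variable
// names, and error handling are assumptions rather than the repo's code.
package database

import (
    "database/sql"
    "fmt"
    "os"

    _ "github.com/lib/pq"
)

func NewDB() (*sql.DB, error) {
    dsn := fmt.Sprintf("host=%s user=%s password=%s dbname=%s sslmode=disable",
        os.Getenv("DB_HOST"), os.Getenv("DB_USER"), os.Getenv("DB_PASS"), os.Getenv("DB_NAME"))
    db, err := sql.Open("postgres", dsn)
    if err != nil {
        return nil, err
    }
    // make sure the connection is actually usable before handing it out
    if err := db.Ping(); err != nil {
        return nil, err
    }
    return db, nil
}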
The next step was building out a bare-bones user system. This would be split into two separate but interrelated packages. The first would be the auth package, which would handle authentication and passwords. The second would be the users package, which would contain the user's profile and other associated information. This would allow me to decouple users' profiles from the users themselves.
The primary information needed in the authentication package would be a user's name, email address, and password. Creating a table for this was a simple SQL command.
func Init() {
    // create the users table if it does not already exist
    if _, err := DB.Exec("CREATE TABLE IF NOT EXISTS users (username text primary key, email text, password text, admin boolean);"); err != nil {
        log.Fatal(err)
    }
}
Creating a struct for passing this data around was also straightforward.
type SignUpCredentials struct {
    Username string `json:"username" db:"username"`
    Email    string `json:"email" db:"email"`
    Password string `json:"password" db:"password"`
}
As was getting the data into code.
creds := &SignUpCredentials{}
err := json.NewDecoder(r.Body).Decode(creds)
This allowed me to push JSON data to the backend service with curl -d @file.json, which would attach file.json to the request so it could be deserialized and processed in golang. Using a mix of dgrijalva/jwt-go and go-chi/jwtauth allowed me to quickly prototype and keep most of the authentication process in the background. That was until I learned about the inherently insecure nature of storing JWTs in JavaScript's sessionStorage and localStorage, which we will come back to later.
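For a sense of scale, signing a token with dgrijalva/jwt-go only takes a few lines. The sketch below is not the repo's actual sign-in handler; the claim names, expiry, and secret handling are placeholders.
// Illustrative only: the claim names, expiry, and secret source here are
// placeholders, not the values sudoscientist actually uses.
package auth

import (
    "net/http"
    "os"
    "time"

    "github.com/dgrijalva/jwt-go"
)

var jwtSecret = []byte(os.Getenv("JWT_SECRET"))

func issueToken(w http.ResponseWriter, username string) {
    token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{
        "username": username,
        "exp":      time.Now().Add(15 * time.Minute).Unix(),
    })
    signed, err := token.SignedString(jwtSecret)
    if err != nil {
        http.Error(w, "could not sign token", http.StatusInternalServerError)
        return
    }
    w.Write([]byte(signed))
}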
For now, authentication was handled, and I proceeded to build out the basic functionality of a blog. This meant user profiles, which I minimized to:
type User struct {
    Username string `json:"username" db:"username"`
    Email    string `json:"email" db:"email"`
    Country  string `json:"country" db:"country"`
    Bio      string `json:"bio" db:"bio"`
}
Keeping things simple and modular was good for the first iteration of this blog. I can add arbitrary data to the Bio field until I feel like more fields are needed. The Email and Username fields are currently redundant, as they exist in the auth package as well, and will probably be stripped out later in favor of foreign keys. The bulk of the work left was the blog package. As with the rest of this project, this was heavily influenced by the Django Framework's handling of Posts.
type BlogPost struct {
    ID            int       `json:"id" db:"id"`
    Title         string    `json:"title" db:"title"`
    Slug          string    `json:"slug" db:"slug"`
    Author        string    `json:"author" db:"author"`
    Content       string    `json:"content" db:"content"`
    TimePublished time.Time `json:"time_published" db:"time_published"`
    Modified      bool      `json:"modified" db:"modified"`
    TimeModified  time.Time `json:"last_modified" db:"last_modified"`
}
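Following the same pattern as the users table, the blog package's Init() creates its table on start-up. The column definitions below are reconstructed from the struct above rather than copied from the repo, so treat them as a sketch.
// Sketch of the blog package's table set-up, mirroring the Init() shown
// earlier for users; the exact column types and constraints are assumptions.
func Init() {
    _, err := DB.Exec(`CREATE TABLE IF NOT EXISTS blog_posts (
        id serial primary key,
        title text,
        slug text unique,
        author text,
        content text,
        time_published timestamptz,
        modified boolean,
        last_modified timestamptz);`)
    if err != nil {
        log.Fatal(err)
    }
}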
Eventually, I may add a string field to the struct for backlinks, which would be useful for this post, and possibly other things as required. I wrote some simple routes to GET posts, and routes to POST and PATCH them.
func Routes() *chi.Mux {
    r := chi.NewRouter()
    r.Group(func(r chi.Router) {
        r.Use(jwtauth.Verifier(TokenAuth))
        r.Use(jwtauth.Authenticator)
        r.Post("/", createBlogPost)
        r.Patch("/by-id/{id}", updateBlogPostById)
    })
    r.Get("/", getBlogPosts)
    r.Get("/by-slug/{slug}", getBlogPostBySlug)
    r.Get("/by-id/{id}", getBlogPostById)
    r.Get("/by-tag/{tag}", getBlogPostsByTag)
    r.Get("/by-author/{author}", getBlogPostsByAuthor)
    return r
}
In a few lines, I was able to declare the routes I would need, as well as apply authentication to certain routes using go-chi/jwtauth. Once I was able to authenticate and create new blog posts with cURL, I decided to start working on the frontend. I knew I would have to fiddle with the backend further along the way, but I was content with the base I had and decided to dive into my apprehensions and fears.
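One backend note before moving on: the TokenAuth passed to jwtauth.Verifier above is a package-level *jwtauth.JWTAuth created in the auth package. The snippet below is a rough reconstruction rather than the repo's exact code; in particular, pulling the signing secret from an environment variable is my assumption.
// Rough sketch of how TokenAuth might be created in the auth package;
// the environment-variable secret is an assumption, not the repo's code.
package auth

import (
    "os"

    "github.com/go-chi/jwtauth"
)

var TokenAuth = jwtauth.New("HS256", []byte(os.Getenv("JWT_SECRET")), nil)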
NodeJS, and how to show a website
JavaScript has always been a language I avoided. Between the hellhole that is npm, my discomfort with visuals, and my rarely needing to make websites, I was able to avoid it for the most part. I made some simple websites in the Web 1.0 days, and even then my aesthetic senses were subpar. Even now, the v1.0.0 of sudoscientist is very minimalistic. I prefer the simple look of white-on-black text in a terminal over most fancy graphics. NodeJS, and JavaScript as a whole, is very different from most development work I've done, and the JavaScript world seems very framework heavy. Do you use Bootstrap or Semantic? Angular? Vue? Meteor? Ionic? My decision to pick React/Redux was straightforward: I have a good friend (thanks Mark!) who is a seasoned user and someone I could turn to for help in my misadventures. If it weren't for that fact, I would probably still be trying to figure out which framework would work best for this project and the other projects I want to pursue later. Like the decision to use Postgres, I wanted to make sure I could use the same methods in the future and reuse as much code as possible.
Learning React/Redux, even now, feels very much like learning React/Redux and not learning JavaScript. There is a ton of knowledge I now have that is very specific to one framework. I don't think I would feel comfortable working with any other framework, and I don't even feel comfortable with React itself yet. Regardless, the project's goals were set to be straightforward. I would go public with the site once I was able to display the posts from my previous blog, and a v1.0.0 would be cut once I was able to post from the blog's UI itself. This meant I would leave authentication until the end, and just work on getting RESTful interactions working with the backend and displaying markdown.
While working with the frontend, I employed a similar pattern for building out the application. I first focused on getting the appropriate libraries installed, and then worked in the browser console to figure out what would be needed to make REST requests against the backend. Once this was completed, the next task was actually displaying posts in the browser. This turned out to be challenging because it involves understanding the difference between React state and Redux state, along with understanding React's rendering loop. This led to a lot of confusion; somewhere between the commits of I have no idea what i'm doing :( and It makes more sense now, I finally understood the difference between the two kinds of state, and how to merge JavaScript objects together to push them into the state and have React rerender the web page.
const initialState = {
  entities: {},
};

const normalizeEntities = (entities, payloadData) => {
  const entitiesMap = {}
  payloadData.forEach(post => entitiesMap[post.id] = post)
  return {...entities, ...entitiesMap}
}

export default (state = initialState, action) => {
  switch (action.type) {
    case 'FETCH_POSTS':
      const mergedEntities = normalizeEntities(state.entities, action.payload)
      return {...state, ...{entities: mergedEntities}}
    ...
This little bit of (magic) code allowed me to progress further and actually get blog posts into the browser. With this, I was content to publish the blog with my old blog's posts and start working on the next steps: authentication and making POST requests.
Authentication, JWTs, and Cookies
Once I posted the blog, the next step would be to allow myself to make POST requests via the browser. This would first require me to set up some form of authentication. The initial revision of this would be very straightforward. Using go-chi/jwtauth allowed me to quickly add a few lines to set up a simple JWT verifier and authenticator around specific routes:
r.Group(func(r chi.Router) {
    r.Use(jwtauth.Verifier(TokenAuth))
    r.Use(jwtauth.Authenticator)
    r.Post("/", createBlogPost)
    r.Patch("/by-id/{id}", updateBlogPostById)
})
While this was a simple solution, paired with storing the JWT in localStorage or sessionStorage, I started reading security guidelines on how to handle secrets in the browser. Upon coming across the OWASP requirement that local or session storage should not contain sensitive information, I reevaluated my solution. The implementation that made the most sense to me was split-cookie authentication. Since implementing things on the backend is a fun task, and always my first step, I started working on a system that would function with cURL. This would require writing some middleware for go-chi/jwtauth, which was as simple as:
// TokenFromSplitCookie reassembles the JWT from the two cookies set by the
// backend: the readable data cookie and the HttpOnly signature cookie.
func TokenFromSplitCookie(r *http.Request) string {
    dataCookie, err := r.Cookie("DataCookie")
    if err != nil {
        return ""
    }
    signatureCookie, err := r.Cookie("SignatureCookie")
    if err != nil {
        return ""
    }
    cookie := dataCookie.Value + "." + signatureCookie.Value
    return cookie
}
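To make the verifier actually consult this extractor, go-chi/jwtauth's Verify variant accepts custom token-finding functions, so the route group only changes slightly. This is a sketch of the wiring rather than the exact code in the repo.
// Sketch: swap jwtauth.Verifier for jwtauth.Verify so the split-cookie
// extractor (and, optionally, the Authorization header) is checked.
r.Group(func(r chi.Router) {
    r.Use(jwtauth.Verify(TokenAuth, TokenFromSplitCookie, jwtauth.TokenFromHeader))
    r.Use(jwtauth.Authenticator)
    r.Post("/", createBlogPost)
    r.Patch("/by-id/{id}", updateBlogPostById)
})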
In addition to this, I added a function on the backend to set cookies in a secure manner:
func setCookies(w http.ResponseWriter, jwt string, expiration time.Time) string {
    splitToken := strings.Split(jwt, ".")
    // DataCookie carries the JWT header and payload and is readable by the frontend
    dataCookie := http.Cookie{Name: "DataCookie", Value: strings.Join(splitToken[:2], "."), Expires: expiration, HttpOnly: false, Path: "/", Domain: ".sudoscientist.com", MaxAge: 360, Secure: true}
    http.SetCookie(w, &dataCookie)
    // SignatureCookie carries only the signature and is HttpOnly
    signatureCookie := http.Cookie{Name: "SignatureCookie", Value: splitToken[2], Expires: expiration, HttpOnly: true, Path: "/", Domain: ".sudoscientist.com", MaxAge: 360, Secure: true}
    http.SetCookie(w, &signatureCookie)
    return strings.Join(splitToken[:2], ".")
}
Together, these pieces split the JWT across the cookies passed between the browser and the backend: the DataCookie is accessible to the JavaScript frontend, while the SignatureCookie is HttpOnly, functionally disallowing any JavaScript from having the entire contents of the token and preventing tampering, since the signature would be invalid on altered data. Furthermore, since most of this was being handled by cookies, getting data out of the DataCookie on the frontend was as simple as const datacookie = cookies.get('DataCookie');. This allowed me to get the data required to render user information accurately in the UI. Finally, with this in place, I had a secure authentication system, and all that was left was setting up a way to make new posts!
First POSTs and the future of sudoscientist
One thing that is understandable about the Node.js ecosystem is that the vast library of node modules is a bit of a necessity. There are a LOT of things that need to be done repeatedly to render data in a browser. One of these repeated tasks is saving, loading, and rendering Markdown. To accomplish this in React, I ended up using React Markdown Editor. The code for this was as simple as:
<div className="markdown-body">
  <ReactMde
    value={content}
    onChange={setContent}
    selectedTab={selectedTab}
    onTabChange={setSelectedTab}
    generateMarkdownPreview={(markdown) =>
      Promise.resolve(<ReactMarkdown source={markdown} />)}
  />
</div>
The react-mde library made it simple to edit and preview my markdown, load it into a JSON blob, and POST it to the backend. This will also be leveraged for the PATCH function to update old posts in the future.
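On the Go side, the handler receiving that JSON blob mirrors the decoding pattern shown earlier in the auth package. This is a rough sketch of createBlogPost under those assumptions; the actual INSERT, validation, and author handling in the repo almost certainly differ.
// Rough sketch of the POST handler; the INSERT statement, validation, and
// author handling are assumptions, not the repo's exact code.
package blog

import (
    "encoding/json"
    "net/http"
)

func createBlogPost(w http.ResponseWriter, r *http.Request) {
    post := &BlogPost{}
    if err := json.NewDecoder(r.Body).Decode(post); err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    _, err := DB.Exec(
        "INSERT INTO blog_posts (title, slug, author, content, time_published) VALUES ($1, $2, $3, $4, now())",
        post.Title, post.Slug, post.Author, post.Content)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusCreated)
}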
Looking forward, the next steps for the blog will be implementing the PATCH feature, maybe adding some regions for backlinks, figuring out and implementing database migrations, and finally comments. Once that is done, I think I will continue to work on other projects and document them on sudoscientist. These two initial blog posts were primarily retrospectives, and as such were written with me just referencing old git commits and piecing the story together. In the future I plan on writing down notes and actually documenting what I'm doing while I do it.
It’s a pleasure to have you reading, and thanks! Hope to see you in the next one.