{"pageProps":{"posts":[{"slug":"deploy-a-python-serverless-function-on-zeit-now","published":true,"title":"Deploy a Python serverless function on ZEIT Now","excerpt":"Let's write an API endpoint using a Python serverless function that will give us DNS records in JSON format from a given domain.","date":"2020-03-16T05:35:07.322Z","status":"In Progress","author":{"name":"Juan Olvera","picture":"/assets/blog/authors/tim.jpeg"},"ogImage":{"url":"/static/deploy-python-on-now.jpeg"},"changeLog":{"Thu Sep 19 2019 00:00:00 GMT+0000 (Coordinated Universal Time)":"First draft"},"content":"\nLet's write an API endpoint using a Python serverless function that will give us DNS records in JSON format from a given domain. We will send a GET request with a query parameter named \"domain\", i.e.,\n\n```\nGET https://localhost:3000/api?domain=\"jolvera.dev\"\n```\n\nThe tools we are going to use are a ZEIT account, npm, now-cli and a browser. I will assume that you already have npm and a browser installed.\n\nThe API that we are going to write is actually how I started the prototype of [DNS check](https://dnscheck.app). DNS check was born out of the necessity to explain clients and co-workers what's happening with a domain when we make a DNS change. Tools already exist that provide DNS lookup information, but most of them aren't simple enough, in my opinion.\n\nWe only need two dependencies for this project, [Bottle](https://bottlepy.org/docs/dev/) and [dnspython](http://www.dnspython.org/).\n\n## Create the Python function\n\nThe first step is to create a function that returns the DNS records (A, CNAME, MX, for example.) of a given domain name using dnspython.\n\n```python\nimport dns.resolver\n\nids = [\n \"A\",\n \"NS\",\n \"CNAME\",\n \"SOA\",\n \"MX\",\n \"TXT\",\n \"AAAA\",\n]\n\ndef resolve(domain: str):\n result = []\n for record in ids:\n try:\n answers = dns.resolver.query(domain, record)\n data = [{\"record\": record, \"value\": rdata.to_text()} \n for rdata in answers]\n for item in data:\n result.append(item)\n except Exception as e:\n print(e)\n return result\n\nprint(resolve('jolvera.dev'))\n```\n\nBefore creating any files in our computer, [let's try this function in Repl.it](https://repl.it/@jolvera/DNS-records). If everything is okay you will get something like:\n\n```js\n[\n {\n 'record': 'A',\n 'value': '3.19.25.128'\n },\n {\n 'record': 'A',\n 'value': '3.19.23.166'\n },\n {\n 'record': 'NS',\n 'value': 'b.zeit-world.org.'\n },\n {\n 'record': 'NS',\n 'value': 'e.zeit-world.net.'\n },\n {\n 'record': 'NS',\n 'value': 'f.zeit-world.com.'\n },\n {\n 'record': 'NS',\n 'value': 'a.zeit-world.co.uk.'\n },\n {\n 'record': 'MX',\n 'value': '10aspmx1.migadu.com.'\n },\n {\n 'record': 'TXT',\n 'value': '\"v=spf1 a mx include:spf.migadu.com ~all\"'\n }\n]\n```\n\nTry with different domains, make sure it works, then proceed to create the API endpoint.\n\n## Convert the function to a Serverless function\n\nThe serverless platform we are going to use is ZEIT Now. They make serverless development extremely easy and projects deployments a joy. \n\nTo follow this example, create an account at [zeit.co](http://zeit.co/) and install Now globally in your system by running `npm i -g now`, and with that you are ready to go.\n\n### Folder structure\n\nCreate a folder and call it `dnscheck` then create another folder inside called `api`.\n\n```bash\n$ mkdir dnscheck\n$ mkdir dnscheck/api\n```\n\nThis folder structure is all you need. 
\n\n### Virtual environment\n\nPython best practices say we need a virtual environment, so we don't install our dependencies globally in the system (Like what we did with `npm i -g now` but in the Python world).\nIn this example, I'm going to use `virtualenv`, but you can use [Poetry](https://poetry.eustace.io/) or [Pipenv](https://docs.pipenv.org/en/latest/) as well. I chose `virtualenv` here to keep things simple.\nInside the `dnscheck/api` folder, create a new virtual environment.\n\n```bash\n$ cd dnscheck/api\n(dnscheck/api) $ virtualenv venv\n```\n\nThen, activate the virtual environment.\n\n```bash\n(dnscheck/api) $ ./venv/bin/activate\n```\n\nWith this, we have our virtual environment ready to go, and we can start installing dependencies that are only available in our project. We start by installing `dnspython` and `bottle`.\n\n```bash\n(dnscheck/api) $ pip install bottle dnspython\n```\n\nThen we create a `requirements.txt` file that lists all the dependencies we are using. Now uses this to install the dependencies while it's deploying.\n\n```bash\n(dnscheck/api) $ pip freeze > requirements.txt\n```\n\n### API endpoint with Bottle\n\nInside the `api/` folder, create a file named [`index.py`](http://index.py/). Inside write the following code.\n\n```python\nfrom bottle import Bottle, request\nimport dns.resolver\n\napp = Bottle()\n\nids = [\n \"A\",\n \"NS\",\n \"CNAME\",\n \"SOA\",\n \"MX\",\n \"TXT\",\n \"AAAA\",\n]\n\n\n@app.get('/api')\ndef api():\n domain = request.query.get('domain')\n result = resolve(domain)\n return dict(data=result)\n \n\ndef resolve(domain: str):\n result = []\n for record in ids:\n try:\n answers = dns.resolver.query(domain, record)\n data = [{\"record\": record, \"value\": rdata.to_text()} \n for rdata in answers]\n for item in data:\n result.append(item)\n except Exception as e:\n print(e)\n return result\n```\n\nThis code listens for GET requests in `http://localhost:3000/api` and tries to grab a query parameter named `domain`, so to be able to get the data we have to send a get request to [http://localhost:3000/api?domain=jolvera.dev](http://localhost:3000/api?domain=jolvera.dev) or simply open the URL in the browser.\n\nTo test it locally, inside the `dnscheck/` folder run `now dev`,\n\n```bash\n(dnscheck/api) $ cd ..\n(dnscheck) $ now dev\n```\n\nif everything is okay, you should get the development URL.\n\n```bash\n> Ready! Available at http://localhost:3000\n```\n\nIf so, go to your browser and visit [http://localhost:3000/api?domain=jolvera.dev](http://localhost:3000/api?domain=jolvera.dev) and you should see a response in JSON like shown in the following screenshot.\n\n![Screenshot of the Python function in the browser](/static/dnscheck-screenshot.png)\n\n### Deploy to Now\n\nEverything is ready to deploy our serverless function to Now and share it to the world!\n\nInside the `dnscheck/` folder run `now` \n\n```bash\n(dnscheck) $ now\n```\n\nand If everything goes okay, we will get a URL like [https://dnscheck-example-2k7pyj3cr.now.sh/](https://dnscheck-example-2k7pyj3cr.now.sh/), in my case I got [https://dnscheck-post-example.jolvera.now.sh/](https://dnscheck-post-example.jolvera.now.sh/), and you can test my domain going to [https://dnscheck-post-example.jolvera.now.sh/api?domain=jolvera.dev](https://dnscheck-post-example.jolvera.now.sh/api?domain=jolvera.dev).\n\n[See the project in action.](https://i.imgur.com/t7l26yB.gifv)\n\n## Conclusion\n\nWe were able to prototype a small idea into a workable API and share it on the web with ease. 
What we did in this post was the premise of my project and the train of thought I had to get something working. After I got the example working, I kept trying new ideas on it and added domain validation, a Next.js frontend and more.\n\nBeing able to prototype an idea into an API with these few tools encourage me to try new things more often. Once you get a first working example you will get motivate to keep going and build lots of things.\nYou can check my finished project dnscheck.app in these two repositories:\n\n- [DNS Check frontend](https://github.com/j0lv3r4/dnscheck-frontend)\n- [DNS Check Serverless function](https://github.com/j0lv3r4/dnscheck)\n\nAlso published in:\n\n- DEV.to\n\n"},{"slug":"good-second-drafts","published":true,"title":"Publishing good second drafts","excerpt":"I’ve been trying to improve the amount of writing I publish. I write a lot, but mostly rough ideas in a notebook. I followed every tip online to write more, and they worked, but now I don’t have content I’d like to publish in my blog, yet. So, the problem wasn’t quantity but quality. Or so I thought.","date":"2019-07-25","status":"Draft","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/publishing-good-second-drafts.jpeg"},"changeLog":{"Thu Jul 25 2019 00:00:00 GMT+0000 (Coordinated Universal Time)":"First draft"},"content":"\nI’ve been trying to improve the amount of writing I publish. I write a lot, but mostly rough ideas in a notebook. I followed every tip online to write more, and they worked, but now I don’t have content I’d like to publish in my blog, yet. So, the problem wasn’t quantity but quality. Or so I thought.\n\nI don’t write for SEO purposes. I don’t care if my website gets tons of visits, because my website is a form of self-documentation. I want to write about what and how I learned it and put it out there in case someone finds it useful.\n\nPublishing revised, edited, quality content is okay for a magazine site, or a serious publication, but for a personal website, I decided that publishing drafts is okay too. The web facilitates that. Put out there a work-in-progress post, get feedback, and update as needed.\n\nThis is not a revolutionary idea. Anne Lammot in her book “Bird by Bird” writes about shitty first drafts, which are what you do when you want to throw whatever you have in your head about a particular topic in paper or your text editor. Random thoughts, senseless paragraphs, stuff you can put together in better shape later, these are part of your shitty first draft.\n\n
> All good writers write them. This is how they end up with good second drafts and terrific third drafts.
\n\nI read her book last year, and I’ve kept this idea of publishing drafts in my blog in my head for a while. This year I came upon [gwern’s website](https://www.gwern.net/) where he uses tags to describe the state of his writings.\n\n
> The \"status\" tag describes the state of completion: whether it’s a pile of links & snippets & \"notes\", or whether it is a \"draft\" which at least has some structure and conveys a coherent thesis, or it’s a well-developed draft which could be described as \"in progress\", and finally when a page is done—in lieu of additional material turning up—it is simply \"finished\".
\n\nWith these two ideas together, I finally had a concept (shitty first drafts) and an implementation (status tags) to publish drafts in my blog. Even though I still don’t want to put shitty first drafts out there, I’m okay with putting good second ones. This concept gives me more freedom on making progress in my content and less pressure to put it out there. Also, it’s open to feedback in an early state.\n\nI’ll try to limit myself from keeping the blog full of drafts. Maybe I’ll keep it at three-four drafts maximum. Also, I want to keep track of the changes, so I’ll somehow keep a changelog per post to document the progress.\n\nThis is my latest attempt to publish more content mostly for self-documentation and journal my experiences. Instead of doing short rants on social media, I expect to put my thoughts in a place I own."},{"slug":"accessible-read-more-links","published":true,"title":"Accessible \"Read More\" links","excerpt":"A pattern you probably have seen in various blogs is the “Read more” link. The design usually has the title first, then a small excerpt of the content and a “Read more” text link to the full post. ","date":"2019-06-18","status":"Draft","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/site-feature.png"},"changeLog":{"Thu Jun 13 2019 00:00:00 GMT+0000 (Coordinated Universal Time)":"First draft"},"content":"\n\"Learn More\" Links are part of a pattern we see in Blogs. The design has the title first, then a content excerpt and a text link that takes us to the post page with the full content.\n\n![Example of a Blog showing a Read more link](/static/read-more-link-example.png)\n\nBlogs have an index page with a list of posts that makes it easier for the general public to choose what to read. This is simple enough the general public, but it gets complicated with people with disabilities. In this specific case, people that use screen readers.\n\nThe fix for this issue is so simple, yet I see it everywhere, on big company websites or simple personal blogs.\n\nBefore I started writing this post, I checked ten blogs—some of them with an excellent reputation—and eight had an inaccessible “Read more” link.\n\nHow can we know if a “Read more” link is inaccessible?\n\n## Screen reader usage\n\nScreen readers generate a list of links without context to help the user map the content of the page. Also, screen reader users usually navigate a page by using the tab key to jump from link to link without reading the content. So you can expect that having a list of “Read more” links is useless and adds redundancy in this context, e.g.,\n\n```markdown\n|- Title for a blog post\n|- Read more\n|- Title for a second blog post\n|- Read more\n|- Third blog post\n|- Read more\n|- Another blog post\n|- Read more\n```\n\n[Also watch “Screen reader demo: don't use 'click here' for links” in YouTube](https://www.youtube.com/watch?v=zGa_rIK1itA)\n\n### Real world examples\n\nLet’s see some examples and inspect the code they use.\n\n#### Example 1\n\n![Another image showing a website using a “Continue Reading” link](/static/read-more-link-example-2.png)\n\nIn this example, the link is a big button acting as a call to action, but the button by itself is lacking context. Continue reading… what?\n\n```html\n\n```\n\n#### Example 2\n\n![Website using a “Read More” link](/static/read-more-link-example-3.png)\n\nIn this example we have something similar, but with a twist. 
We have markup with the right intention to show some text to screen readers, but it’s used wrong.\n\nIn this example we have something similar, but with a twist. We have markup with the right intention to show some text to screen readers, but it’s used wrong.\n\nThe `span` element you see with the class `screen-reader-text` is supposed to be used as an alternative text for screen readers. That part of the HTML should have meaningful text like “Read More about free duotone photoshop gradient presets.” But, even if it had it, the screen reader would read both parts, adding redundancy and confusing the user.\n\n```html\n\n Read More\n Read More\n\n```\n\n#### Example 3\n\n![Website using a “continue reading” link](/static/read-more-link-example-4.png)\n\nWe are very close! The developer used the `title` attribute to add an alternative text to the content inside the link. Unfortunately, [the `title` attribute is not accessible for everyone.](https://developer.paciellogroup.com/blog/2010/11/using-the-html-title-attribute/)\n\n```html\n\n continue reading\n\n```\n\n## Solutions\n\nIn these examples, we have user interface helpers to make it easier for the general public to access the content, but for screen reader users we are adding redundancy by having a useful link and a link without context reading “Read More” or “Continue Reading.”\n\n### No more read more\n\nThere are a few ways to improve this. The first solution would be to remove the “Read more” link altogether, the way [A list Apart](https://alistapart.com/) and [David Walsh](https://davidwalsh.name) do it. For me, this is the best solution, we remove duplicated code, redundancy, and keep the user interface concise.\n\nBut there are a few solutions in case you want to keep your “Read more” buttons.\n\n### Meaningful text link\n\nOne solution is to keep the “Read more” link but add descriptive keywords or context to the link. Instead of having just “Read more,” you would do something like:\n\n```html\n\n Read Everything You Need to Know About Date in JavaScript\n\n```\n\nWith this fix, we attach meaning to the screen reader users, but we hit another issue, we added redundancy to the general public. Also, the link is way too long.\n\n### The `aria-label` attribute\n\nThe optimal solution, in my opinion, if you want to keep the “Read more” link, is to keep the text short but add a meaningful ARIA label describing the action in the link.\n\nThis will announce to the screen reader exactly where the link will take them.\n\n```html\n\n Read article\n\n```\n\nWe keep the link short and concise for both general public and screen reader users by removing redundancy in the text and make it explicit with the ARIA label.\n\n[Watch the difference between a meaningless link and one with the `aria-label` attribute in a YouTube video.](https://www.youtube.com/watch?v=1Zb5MW_nkLI)\n\n## Conclusion\n\nIf you can, remove the “Read more” link and keep the heading or title of the post as the primary link, if you must have the “Read more” link for any reason, use the `aria-label` attribute to add context for screen reader users.\n\nThese changes might look like nitpicking, but these small improvements are a huge help with people that use assistive technology. It doesn’t take much time to do it, and the benefits are worth it.\n"},{"slug":"rebuilding-my-blog-with-nextjs","published":true,"title":"Rebuilding my blog with Next.js","excerpt":"I use my website for hacking with new technologies more than writing content. 
This time I’m rebuilding it using Next.js and if you’re reading this, I already deployed the first version.","date":"2019-05-13","status":"In Progress","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/rebuilding-my-blog-with-nextjs.jpeg"},"changeLog":{"Tue Oct 29 2019 00:00:00 GMT+0000 (Coordinated Universal Time)":"Add reference to script generating the JSON feed in build time and fix grammar issues."},"content":"\nI use my website to try new technologies more than writing content, this means that I’ve rebuilt my site more times than I’ve written articles. This time I’m rebuilding it using Next.js, and if you’re reading this, I already deployed the first version.\n\n## Why Next.js\n\nWith the rise of React and the SSR frameworks, e.g., Gatsby and Next.js, I saw a lot of beautiful, fast blogs built with Gatsby. [You probably have already seen Dan’s](https://overreacted.io/). Of course, I went and tried to set up my own, but the thing I didn’t like was that Gatsby blog-starter requires GraphQL, which is good, but I don’t think I need that for a simple blog.\n\nSo with the bad habit I have of wanting to build my version of everything, I started to investigate how to build a blog with Next.js.\n\n## How\n\nI got familiar with Next.js by contributing a couple of examples and other small changes. I’m also currently building two products with it, so I felt confident that I could build something decent.\n\n### Checklist\n\nI started with an empty project using [create-next-app](https://github.com/segment-open-source-transfer/create-next-app), and from there, I made a list of the features I needed.\n\n1. Generate a list of posts from [MDX](https://mdxjs.com/) files\n2. Add syntax highlighting\n3. Add an RSS feed\n4. Add pagination\n\nFor a Next.js project, all these things were new for me, and I had no idea how to implement them. So, I went to investigate how other people are doing it.\n\nGuillermo Rauch ([@rauchg](https://twitter.com/rauchg)) told me that Max Stoiber’s site ([@mxstbr](https://twitter.com/mxstbr)) was one of his favorite blogs built with Next.js out there, so I looked at his [GitHub repo](https://github.com/mxstbr/mxstbr.com).\n\nMax has 80% of the features I need in his repository, so most of the credit for the work goes to him; his implementation is pretty smart and helped me a lot. I was fortunate to chat with him about it too. \n\n### 1. Generate a list of posts from MDX files\n\nThe goal is to use Node to read the contents of a folder and get a list of files, then for each file, create a JavaScript Object with `title`, `content`, and other metadata.\n\nTo use the file system module at build-time, Max used [babel-plugin-preval/macro](https://github.com/kentcdodds/babel-plugin-preval); this package lets you run Node code dynamically in a client context and export the results.\n\nWith [Max’s implementation](https://github.com/mxstbr/mxstbr.com/blob/master/pages/thoughts/index.js) we can import the list of posts and iterate through the `Object`, e.g.,\n\n```jsx{1,6,7,8,9,10,11,12}\nimport blogposts from \"../../data/blog-posts\"; \n\nfunction Blog() {\n return (\n
    <ul>\n      {blogposts.map((post, index) => (\n        <li key={index}>\n          <h3>{post.title}</h3>\n          <p>{post.summary}</p>\n        </li>\n      ))}\n    </ul>\n  );\n}\n\nexport default Blog;\n```\n\n### 2. Syntax highlighting\n\nSyntax highlighting was harder than I thought. Basic syntax highlighting worked with [rehype-prism](https://github.com/mapbox/rehype-prism), but one key feature was missing: the ability to highlight a line of code, e.g.,\n\n```jsx{6,7,8,9}\nimport React, { useState, useEffect, useRef } from 'react';\n\nfunction counter() {\n  let [count, setCount] = useState(0);\n\n  setInterval(() => {\n    // Look, I'm highlighted!\n    setCount(count + 1);\n  }, 1000);\n\n  return <h1>{count}</h1>
;\n}\n```\n\nAdding line highlighting was probably the hardest part of the process. To implement it, I went to a rabbit hole of learning about [unified.js](https://unified.js.org/) and how syntax trees work. I had to understand how the Gatsby team and contributors implemented their own and how to plug it into the MDX plugin interface.\n\nI stole code got inspiration from these packages:\n\n- [gatsby-remark-prismjs](https://github.com/gatsbyjs/gatsby/tree/master/packages/gatsby-remark-prismjs)\n- [@mapbox/rehype-prism](https://github.com/mapbox/rehype-prism)\n- [refractor](https://github.com/wooorm/refractor)\n\nI won’t go in detail, but I integrated code from those three packages to get syntax highlighting working along with the line highlighting feature. Other features are missing but got it’s working for now. [Since I’ve created a fork that adds line highlighting to rehype-prism](https://github.com/j0lv3r4/mdx-prism). \n\n### 3. RSS feed\n\nThis feature was easy. I followed Max’s lead in using the [JSON feed spec](https://jsonfeed.org) and reformatting the blog post `Object` into a JSON feed.\n\nI still haven’t figure out how to create the JSON file on build time, so, for now, I’m running the node script before committing changes to generate the feed and routing it as a static file.\n\nAn npm script runs a [Node function](https://github.com/j0lv3r4/jolvera.dev/blob/master/posts/rss-feed.js) that generates the feed as a static file when it’s deploying in Now.\n\n```js{6,7}\n// package.json\n\n \"scripts\": {\n \"dev\": \"next\",\n \"build\": \"next build\",\n \"build:rss\": \"node ./.next/serverless/posts/rss-feed.js\",\n \"now-build\": \"next build && yarn build:rss\",\n \"start\": \"next start\",\n \"test\": \"jest\"\n },\n```\n\n### 4. Pagination\n\nThis feature was easy, too. I used the [pagination library](https://www.npmjs.com/package/pagination) with the [blog posts `Object`’s length and other options as input](https://github.com/j0lv3r4/jolvera.dev/blob/master/pages/blog.js#L13-L18).\n\n## Development environment\n\nOne of the benefits of using Next.js with Now 2.0 is the [`now dev`](https://zeit.co/blog/now-dev) command. You get to see what you will get in production. It uses the same `now.json` configuration file and pretty much everything else works the same way.\n\n## Deployment\n\nAfter all the work, I got the project into a good-enough working blog using Next.js. At this point, I was really excited to have it deployed, and that [Saturday night I decided to launch it using Now 2.0](https://twitter.com/_jolvera/status/1127431569042550784?s=20).\n\n## Conclusion\n\nThe website feels fast. The [Lighthouse audit](https://twitter.com/_jolvera/status/1127432136565383169?s=20) results are amazing. The easiness of adding content feels as if you were dealing with a CMS, except there’s no login.\n\nThe SSR and pre-fetching features Next.js provides makes the site feel very smooth, fast and responsive.\n\nSo far I’m pleased with the experience of developing with Next.js and Now. I will submit a pull request to the Next.js repository to add the blog as an example, and I hope somebody will find this work useful as I found Max’s.\n\nAlso published in DEV.to."},{"slug":"user-authentication-with-nextjs","published":true,"title":"User Authentication with Next.js","excerpt":"User authentication with Next.js has been one of the most requested examples by the community. 
The GitHub issue had more than 300 likes and hundreds of comments with recommendations and proposals.","date":"2019-02-20","status":"Finished","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/auth-nextjs.jpg"},"changeLog":null,"content":"\nUser authentication with Next.js has been one of the most requested examples by the community. [The GitHub issue](https://github.com/zeit/next.js/issues/153) had more than 300 likes and hundreds of comments with recommendations and proposals.\n\nThe issue asked the community to contribute an example with certain requirements:\n\n- re-usable authentication helper across pages\n- session synchronization among tabs\n- simple passwordless email backend hosted on `now.sh`\n\nThe primary purpose of this example was to have a starting point for newcomers.\n\nWith the release of [Next.js 8](https://nextjs.org/blog/next-8/) an example was finally accepted and merged into the [examples repository](https://github.com/zeit/next.js/tree/canary/examples/with-cookie-auth). In this post, we will create the example from scratch.\n\n_You can find the code in the [Next.js examples repository](https://github.com/zeit/next.js/tree/canary/examples/with-cookie-auth) or play with the [working demo deployed in Now](https://nextjs-with-cookie-auth.now.sh) 2._\n\n- Project Setup\n- Backend\n- Frontend\n\n - Login Page and Authentication\n - Profile Page and Authorization\n\n - Authorization Helper Function\n - Authorization High Order Component\n - Page Component with Authorized requests\n\n - Logout and Session Synchronization\n- Deploy to Now 2\n- Local Development\n- Conclusion\n\n## Project setup\n\nWe'll set up the project as a [monorepo](https://zeit.co/examples/monorepo/) with the recommended folder structure along with a `now.json` file so that we can deploy it to Now.\n\n```bash\n$ mkdir project\n$ cd project\n$ mkdir www api\n$ touch now.json\n```\n\n## Backend\n\nWe will use `micro` to handle our incoming requests and `isomoprhic-unfetch` to make our outoing API requests.\n\n```bash\n$ cd api\n$ npm install isomorphic-unfetch micro --save\n```\n\nTo simplify our example, we'll use the GitHub API as a passwordless backend. 
Our backend will call the `/users/:username` endpoint and retrieve the users’ `id`, then from now on, this `id` will be our token.\n\nIn our app, we'll create two functions that will work as endpoints: `login.js` to return a token, and `profile.js` to return the user information from a given token.\n\n```js\n// api/login.js\n\nconst { json, send, createError, run } = require(\"micro\");\nconst fetch = require(\"isomorphic-unfetch\");\n\nconst login = async (req, res) => {\n const { username } = await json(req);\n const url = `https://api.github.com/users/${username}`;\n\n try {\n const response = await fetch(url);\n if (response.ok) {\n const { id } = await response.json();\n send(res, 200, { token: id });\n } else {\n send(res, response.status, response.statusText);\n }\n } catch (error) {\n throw createError(error.statusCode, error.statusText);\n }\n};\n\nmodule.exports = (req, res) => run(req, res, login);\n```\n\n```js\n// api/profile.js\n\nconst { send, createError, run } = require(\"micro\");\nconst fetch = require(\"isomorphic-unfetch\");\n\nconst profile = async (req, res) => {\n if (!(\"authorization\" in req.headers)) {\n throw createError(401, \"Authorization header missing\");\n }\n\n const auth = await req.headers.authorization;\n const { token } = JSON.parse(auth);\n const url = `https://api.github.com/user/${token}`;\n\n try {\n const response = await fetch(url);\n\n if (response.ok) {\n const js = await response.json();\n // Need camelcase in the frontend\n const data = Object.assign({}, { avatarUrl: js.avatar_url }, js);\n send(res, 200, { data });\n } else {\n send(res, response.status, response.statusText);\n }\n } catch (error) {\n throw createError(error.statusCode, error.statusText);\n }\n};\n\nmodule.exports = (req, res) => run(req, res, profile);\n```\n\nWith this, we have everything we need to handle our simplified Authentication/Authorization strategy in the backend.\n\n## Frontend\n\nNow, inside our `www/` folder, we need to install our Next.js app and dependencies,\n\n```bash\n$ cd www/\n$ npm create-next-app .\n$ npm install\n$ npm install isomorphic-unfetch next-cookies js-cookie --save\n```\n\ncreate our pages,\n\n```bash\n$ touch pages/index.js\n$ touch pages/profile.js\n```\n\nthe file that will contain our authentication helpers,\n\n```bash\n$ mkdir utils\n$ touch utils/auth.js\n```\n\nand the file that will contain our custom server for local development. We'll need this later to replicate the monorepo setup locally.\n\n```bash\n$ touch server.js\n```\n\nAt this point, our `www/` folder structure should look like this.\n\n```bash\n.\n├── components\n│ ├── header.js\n│ └── layout.js\n├── package-lock.json\n├── package.json\n├── pages\n│ ├── index.js\n│ ├── login.js\n│ └── profile.js\n├── server.js\n└── utils\n └── auth.js\n```\n\nOur frontend structure is ready.\n\n### Login Page and Authentication\n\nThe login page will contain the form that will authenticate our users. The form will send a POST request to the `/api/login.js` endpoint with a username, then if the username exists the backend will return a token.\n\nFor this example, as long as we keep this token in the frontend, we can say that the user has an active session.\n\n```jsx\n// www/pages/login.js\n\nimport { Component } from \"react\";\nimport fetch from \"isomorphic-unfetch\";\nimport Layout from \"../components/layout\";\nimport { login } from \"../utils/auth\";\n\nclass Login extends Component {\n static getInitialProps({ req }) {\n const protocol = process.env.NODE_ENV === \"production\" ? 
\"https\" : \"http\";\n\n const apiUrl = process.browser\n ? `${protocol}://${window.location.host}/api/login.js`\n : `${protocol}://${req.headers.host}/api/login.js`;\n\n return { apiUrl };\n }\n\n constructor(props) {\n super(props);\n\n this.state = { username: \"\", error: \"\" };\n this.handleChange = this.handleChange.bind(this);\n this.handleSubmit = this.handleSubmit.bind(this);\n }\n\n handleChange(event) {\n this.setState({ username: event.target.value });\n }\n\n async handleSubmit(event) {\n event.preventDefault();\n const username = this.state.username;\n const url = this.props.apiUrl;\n\n try {\n const response = await fetch(url, {\n method: \"POST\",\n headers: { \"Content-Type\": \"application/json\" },\n body: JSON.stringify({ username })\n });\n if (response.ok) {\n const { token } = await response.json();\n login({ token });\n } else {\n console.log(\"Login failed.\");\n // https://github.com/developit/unfetch#caveats\n let error = new Error(response.statusText);\n error.response = response;\n return Promise.reject(error);\n }\n } catch (error) {\n console.error(\n \"You have an error in your code or there are Network issues.\",\n error\n );\n throw new Error(error);\n }\n }\n\n render() {\n return (\n \n
      <Layout>\n        <form onSubmit={this.handleSubmit}>\n          <label htmlFor=\"username\">GitHub username</label>\n\n          <input\n            type=\"text\"\n            id=\"username\"\n            name=\"username\"\n            value={this.state.username}\n            onChange={this.handleChange}\n          />\n\n          <button type=\"submit\">Login</button>\n\n          <p>\n            {this.state.error && `Error: ${this.state.error}`}\n          </p>\n        </form>\n      </Layout>
\n );\n }\n}\n\nexport default Login;\n```\n\nOur `getInitialProps()` will generate a URL based on the environment we are and by checking if we are in the browser or the server.\n\nThe first line will set the protocol to `https` or `https` depending on the environment.\n\n```js\n...\nconst protocol = process.env.NODE_ENV === 'production' ? 'https' : 'http'\n...\n```\n\nNext, we get our `host` depending on whether we are in the browser or the server. This way, we will get the right URL even if we are in Now with a dynamically generated URL or in our local development using `http://localhost:3000`.\n\n```js\n...\nconst apiUrl = process.browser\n ? `${protocol}://${window.location.host}/${endpoint}`\n : `${protocol}://${req.headers.host}/${endpoint}`;\n...\n```\n\nEverything else is pretty standard with a form that makes a POST request on submission. We also use the local state to handle our simple validation error messages.\n\nIf our request is successful, we'll log in our user by saving the cookie with the token we got from the API, and redirect the user to our profile page.\n\n```js\n...\ncookie.set(\"token\", token, { expires: 1 });\nRouter.push(\"/profile\")\n...\n```\n\n### Profile Page and Authorization\n\nWith client-only SPAs, to Authenticate or Authorize a user, we have to let them request the page, load the JavaScript and then send a request to the server to verify the user’s session. Fortunately, Next.js gives us SSR, and we can check the user’s session on the server using `getInitialProps();`.\n\n#### Authorization Helper Function\n\nBefore creating our profile page, we'll create a helper function in `www/utils/auth.js` that will restrict access to Authorized users.\n\n```js\n// www/utils/auth.js\n\nimport Router from \"next/router\";\nimport nextCookie from \"next-cookies\";\n\nexport const auth = ctx => {\n const { token } = nextCookie(ctx);\n\n if (ctx.req && !token) {\n ctx.res.writeHead(302, { Location: \"/login\" });\n ctx.res.end();\n return;\n }\n\n if (!token) {\n Router.push(\"/login\");\n }\n\n return token;\n};\n```\n\nWhen a user loads the page, the function will try to get the token from the cookie using `nextCookie`, then if the session is invalid it will redirect the browser to the login page, otherwise Next.js will render the page normally.\n\n```js\n// Implementation example\n...\nProfile.getInitialProps = async ctx => {\n // Check user's session\n const { token } = auth(ctx);\n\n return { token }\n}\n...\n```\n\nThis helper is simple enough for our example and works on the server and the client. Optimally, we want to restrict access on the server, so we don't load unnecessary resources.\n\n#### Authorization High Order Component\n\nAnother way to abstract this, is using a HOC that we can use in our restricted pages like Profile. We could use it like this:\n\n```jsx\nimport { withAuthSync } from \"../utils/auth\";\n\nconst Profile = props =>
  <div>If you can see this, you are logged in.</div>;
\n\nexport default withAuthSync(Profile);\n```\n\nIt will also be useful later for our logout functionality. We write our HOC the standard way and include our `auth` helper function to take care of the Authorization.\n\nWe create our HOC in our `auth.js` file as well.\n\n```jsx\n// Gets the display name of a JSX component for dev tools\nconst getDisplayName = Component =>\n  Component.displayName || Component.name || \"Component\";\n\nexport const withAuthSync = WrappedComponent =>\n  class extends Component {\n    static displayName = `withAuthSync(${getDisplayName(WrappedComponent)})`;\n\n    static async getInitialProps(ctx) {\n      const token = auth(ctx);\n\n      const componentProps =\n        WrappedComponent.getInitialProps &&\n        (await WrappedComponent.getInitialProps(ctx));\n\n      return { ...componentProps, token };\n    }\n\n    render() {\n      return <WrappedComponent {...this.props} />;\n    }\n  };\n```\n\n#### Page Component with Authorized requests\n\nOur profile page will show our GitHub avatar, name and bio. To pull this data from our API, we need to send an Authorized request. Our API will throw an error if the session is invalid, and if so, we will redirect the user to the login page.\n\nWith this, we create our restricted profile page with the authorized API calls.\n\n```jsx\n// www/pages/profile.js\n\nimport Router from \"next/router\";\nimport fetch from \"isomorphic-unfetch\";\nimport nextCookie from \"next-cookies\";\nimport Layout from \"../components/layout\";\nimport { withAuthSync } from \"../utils/auth\";\n\nconst Profile = props => {\n  const { name, login, bio, avatarUrl } = props.data;\n\n  return (\n    <Layout>\n      <img src={avatarUrl} alt=\"Avatar\" />\n      <h1>{name}</h1>\n      <p>{login}</p>\n      <p>{bio}</p>\n    </Layout>
\n );\n};\n\nProfile.getInitialProps = async ctx => {\n // We use `nextCookie` to get the cookie and pass the token to the\n // frontend in the `props`.\n const { token } = nextCookie(ctx);\n const protocol = process.env.NODE_ENV === \"production\" ? \"https\" : \"http\";\n\n const apiUrl = process.browser\n ? `${protocol}://${window.location.host}/api/profile.js`\n : `${protocol}://${ctx.req.headers.host}/api/profile.js`;\n\n const redirectOnError = () =>\n process.browser\n ? Router.push(\"/login\")\n : ctx.res.writeHead(301, { Location: \"/login\" });\n\n try {\n const response = await fetch(apiUrl, {\n credentials: \"include\",\n headers: {\n \"Content-Type\": \"application/json\",\n Authorization: JSON.stringify({ token })\n }\n });\n\n if (response.ok) {\n return await response.json();\n } else {\n // https://github.com/developit/unfetch#caveats\n return redirectOnError();\n }\n } catch (error) {\n // Implementation or Network error\n return redirectOnError();\n }\n};\n\nexport default withAuthSync(Profile);\n```\n\nWe send our `GET` request to our API with the `credentials: \"include\"` option to make sure our header `Authorization` is sent with our token in it. With this, we make sure our API gets what it needs to authorize our request and return the data.\n\n### Logout and Session Synchronization\n\nIn our frontend, to log out the user, we need to clear the cookie and redirect the user to the login page. We add a function in our `auth.js` file to do so.\n\n```js\n// www/auth.js\n\nimport cookie from \"js-cookie\";\nimport Router from \"next/router\";\n\nexport const logout = () => {\n cookie.remove(\"token\");\n Router.push(\"/login\");\n};\n```\n\nEvery time we need to log out our user we call this function, and it should take care of it. However, one of the requirements was session synchronization, that means that if we log out the user, it should do it from all the browser tabs/windows. 
To do this we need to listen to a global event listener, but instead of setting something like a custom event we will use [storage event](https://developer.mozilla.org/en-US/docs/Web/Events/storage).\n\nTo make it work we would have to add the event listener to all the restricted pages `componentDidMount` method, so instead of doing it manually, we'll include it in our [withAuthSync HOC](https://reactjs.org/docs/higher-order-components.html).\n\n```jsx\n// www/utils/auth.js\n\n// Gets the display name of a JSX component for dev tools\nconst getDisplayName = Component =>\n Component.displayName || Component.name || \"Component\";\n\nexport const withAuthSync = WrappedComponent =>\n class extends Component {\n static displayName = `withAuthSync(${getDisplayName(WrappedComponent)})`;\n\n static async getInitialProps(ctx) {\n const token = auth(ctx);\n\n const componentProps =\n WrappedComponent.getInitialProps &&\n (await WrappedComponent.getInitialProps(ctx));\n\n return { ...componentProps, token };\n }\n\n // New: We bind our methods\n constructor(props) {\n super(props);\n\n this.syncLogout = this.syncLogout.bind(this);\n }\n\n // New: Add event listener when a restricted Page Component mounts\n componentDidMount() {\n window.addEventListener(\"storage\", this.syncLogout);\n }\n\n // New: Remove event listener when the Component unmount and\n // delete all data\n componentWillUnmount() {\n window.removeEventListener(\"storage\", this.syncLogout);\n window.localStorage.removeItem(\"logout\");\n }\n\n // New: Method to redirect the user when the event is called\n syncLogout(event) {\n if (event.key === \"logout\") {\n console.log(\"logged out from storage!\");\n Router.push(\"/login\");\n }\n }\n\n render() {\n return ;\n }\n };\n```\n\nThen, we add the event that will trigger the log out on all windows to our `logout` function.\n\n```js\n// www/utils/auth.js\n\nimport cookie from \"js-cookie\";\nimport Router from \"next/router\";\n\nexport const logout = () => {\n cookie.remove(\"token\");\n // To trigger the event listener we save some random data into the `logout` key\n window.localStorage.setItem(\"logout\", Date.now()); // new\n Router.push(\"/login\");\n};\n```\n\nFinally, because we added this functionality to our Authentication/Authorization HOC, we don't need to change anything in our Profile page.\n\nNow, every time our user logs out, the session will be synchronized across all windows/tabs.\n\n## Deploy to Now 2\n\nThe only thing left is to write our configuration in our `now.json` file.\n\n```js\n// now.json\n\n{\n \"version\": 2,\n \"name\": \"cookie-auth-nextjs\", //\n \"builds\": [\n { \"src\": \"www/package.json\", \"use\": \"@now/next\" },\n { \"src\": \"api/*.js\", \"use\": \"@now/node\" }\n ],\n \"routes\": [\n { \"src\": \"/api/(.*)\", \"dest\": \"/api/$1\" },\n { \"src\": \"/(.*)\", \"dest\": \"/www/$1\" }\n ]\n}\n```\n\nThe configuration file tells Now how to route our requests and what builders to use. You can read more about it in the [Deployment Configuration (now.json)](https://zeit.co/docs/v2/deployments/configuration) page.\n\n## Local Development\n\nIn our API, the functions `profile.js` and `login.js` work correctly as [lambdas](https://zeit.co/docs/v2/deployments/concepts/lambdas/) when they are deployed in Now 2, but we can’t work with them locally as they are right now.\n\nWe can use them locally by importing the functions into a small server using basic routing. 
To accomplish this, we create a third file called `dev.js` that we'll use for local development only and install `micro-dev` as a development dependency.\n\n```bash\n$ cd api\n$ touch dev.js\n$ npm install micro-dev --save-dev\n```\n\n```js\n// api/dev.js\n\nconst { run, send } = require(\"micro\");\nconst login = require(\"./login\");\nconst profile = require(\"./profile\");\n\nconst dev = async (req, res) => {\n switch (req.url) {\n case \"/api/profile.js\":\n await profile(req, res);\n break;\n case \"/api/login.js\":\n await login(req, res);\n break;\n\n default:\n send(res, 404, \"404. Not found.\");\n break;\n }\n};\n\nexports.default = (req, res) => run(req, res, dev);\n```\n\nThe server will return the functions when a specific URLs is requested, this is a bit unconventional for routing, but it works for our example.\n\nThen, in our frontend, we'll use a custom server for our Next.js app that will proxy certain requests to our API server. For this, we'll use `http-proxy` as a development dependency,\n\n```bash\n$ cd www\n$ npm install http-proxy --save-dev\n```\n\n```js\n// www/server.js\n\nconst { createServer } = require(\"http\");\nconst httpProxy = require(\"http-proxy\");\nconst { parse } = require(\"url\");\nconst next = require(\"next\");\n\nconst dev = process.env.NODE_ENV !== \"production\";\nconst app = next({ dev });\nconst handle = app.getRequestHandler();\n\nconst proxy = httpProxy.createProxyServer();\nconst target = \"http://localhost:3001\";\n\napp.prepare().then(() => {\n createServer((req, res) => {\n const parsedUrl = parse(req.url, true);\n const { pathname, query } = parsedUrl;\n\n switch (pathname) {\n case \"/\":\n app.render(req, res, \"/\", query);\n break;\n\n case \"/login\":\n app.render(req, res, \"/login\", query);\n break;\n\n case \"/api/login.js\":\n proxy.web(req, res, { target }, error => {\n console.log(\"Error!\", error);\n });\n break;\n\n case \"/profile\":\n app.render(req, res, \"/profile\", query);\n break;\n\n case \"/api/profile.js\":\n proxy.web(req, res, { target }, error => console.log(\"Error!\", error));\n break;\n\n default:\n handle(req, res, parsedUrl);\n break;\n }\n }).listen(3000, err => {\n if (err) throw err;\n console.log(\"> Ready on http://localhost:3000\");\n });\n});\n```\n\nand the last step is to modify our `package.json` to run our custom server with `npm run dev`.\n\n```js\n// www/package.json\n\n...\n \"scripts\": {\n \"dev\": \"node server.js\",\n \"build\": \"next build\",\n \"start\": \"next start\"\n},\n...\n```\n\nWith this setup we can deploy it to Now 2 running `now` at the root folder or use it locally running `micro-dev dev.js -p 3001` inside the `api/` folder and `npm run dev` inside the `www/` folder.\n\n## Conclusion\n\nThis example is the result of going through the issue comments, proposals, code examples, blog posts, and existing implementations and extracting the best parts of each one.\n\nThe example ended being a minimal representation of how Authentication should work in the Frontend using Next.js, leaving out features you might need in a real-world implementation and third-party libraries that were strongly recommended like Redux and Apollo (with GraphQL). Also, the example is backend agnostic, making it easy to use with any language in the server.\n\nFinally, one of the many discussions was whether to use `localStorage` or cookies. 
The example uses cookies so we can share the token between the server and the client.\n\nAlso published in:\n- DEV.to\n"},{"slug":"personal-rules-for-using-the-internet","published":true,"title":"Personal rules for using the Internet","excerpt":"This list is a small set of rules I plan to follow to improve my Internet consumption. I put this together because I have been wasting time on my computer in my free time.","date":"2019-01-03","status":"Finished","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/site-feature.png"},"changeLog":null,"content":"\nThis list is a small set of rules I plan to follow to improve my Internet consumption. I put this together because I have been wasting time on my computer in my free time.\n\nFor example, one Friday night I found myself with some free time before going to bed—everyone was asleep for some reason. So I went to my office with no goal in mind, turned on my computer and started reading my Twitter feed, then switched to Hackernews and finished with watching memes on Reddit.\n\nBefore I noticed, I missed the time I wanted to be in bed and wasted an extra hour. I could have spent the time doing something productive like going for a walk, reading a book or working on one of my side projects.\n\nThat night I decided to write a set of rules that would remind me to be productive in some way the next time I have free time.\n\nThe list is a work in progress, but the rules I’ve got so far are:\n\n- Sit down only if you have the intention of:\n - Work\n - Create something\n - Contribute or help\n - Improve your skills or learn a new one\n- If you find yourself mindlessly scrolling STOP and:\n - Stretch and take a walk\n - Workout for 20 minutes\n - Read a chapter of a book\n - Write 750 words\n- Don't read the news out of boredom\n- Your attention is your most valuable asset, treat it like that\n- Never argue with someone on the Internet. No force of nature, facts, or dark magic can change the other person's opinion\n- Protect the privacy of the others as if it were yours\n\nThe last rule looks unrelated to the rest, but for me is essential to remember. If the list is going to help me be productive, I'm probably going to write code, and I want to keep the rule in mind.\n\nOf course, none of these are obligatory or absolute. The list should provide a set of rules I can use as a reference when I’m unsure of what to do in front of my computer.\n\nThe purpose is to be productive while doing something I enjoy in my free time.\n"},{"slug":"how-i-organize-my-sass-projects","published":true,"title":"How I organize my Sass projects","excerpt":"his is a basic writing on how I organize my Sass projects, mostly for self documentation. I have two structures; mid and small-size projects. I work mostly for small business, so there is no need to over engineer my Sass code.","date":"2015-06-16","status":"Finished","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/site-feature.png"},"changeLog":{"Thu Jun 13 2019 00:00:00 GMT+0000 (Coordinated Universal Time)":"First draft"},"content":"\nThis is a basic writing on how I organize my Sass projects, mostly for self documentation.\n\nI have two structures; mid and small-size projects. 
I work mostly for small business, so there is no need to over engineer my Sass code.\n\n### Small projects\n\n```bash\nplugins/\n |-- _colorpicker.scss\n |-- _normalize.scss\nutils/\n |-- _functions.scss # All mixins, functions, extends, etc.\nbase/\n |-- _base.scss # Base, typography and forms.\ncomponents/ # Micro components.\n |-- _buttons.scss\n |-- _dropdown.scss\n |-- _navigation.scss\nlayout/ # Macro components.\n |-- _grid.scss\n |-- _layout.scss # Header, container, footer, etc.\n |-- _sidebar.scss\npages/ # Specific pages styles.\n |-- _pages.scss # Home, contact, etc.\ntodo.scss # Styles that I can organize later.\nmain.scss # The manifest.\n```\n\n### Mid-size projects\n\n```bash\nplugins/\n |-- _colorpicker.scss\n |-- _normalize.scss\nutils/\n |-- _extends.scss\n |-- _functions.scss\n |-- _mixins.scss\n |-- _variables.scss\nbase/\n |-- _base.scss\n |-- _forms.scss\n |-- _typography.scss\ncomponents/ # Micro components.\n |-- _buttons.scss\n |-- _dropdown.scss\n |-- _feedback.scss\n |-- _navigation.scss\n |-- _tabs.scss\nlayout/ # Macro components.\n |-- _container.scss\n |-- _footer.scss\n |-- _grid.scss\n |-- _header.scss\n |-- _sidebar.scss\npages/ # Specific pages styles.\n |-- _contact.scss\n |-- _home.scss\ntodo.scss # Styles that I can organize later.\nmain.scss # The manifest.\n```\n\nIn both structures I use the same folders and each one have their own purpose.\n\n### `plugins/`\n\nWhen I don't use [Bower](http://bower.co) for third party libraries, this is the folder where they go.\n\n### `utils/`\n\nHere are the files that have functions, mixins, extends, or classes that I use more than once in different folders.\n\nNotice that if I use a mixin only once, I include it inside the file where is needed, for example I have a `button-generator` mixin that I only use on `_bottons.scss`, so I have it included in there only.\n\n### `base/`\n\nCore styles like `html`, `body`, `p` and `a` tags, also the `form` and `table`, for example. In other words, the foundation (not the framework) of the project.\n\n### `components/`\n\nHere are all the UI (User Interface) components that are required to use in the layout.\n\nFor example, we have the navigation that goes inside the header, or the buttons that goes almost on any part of the layout; header, main, footer, etc.\n\n### `layout/`\n\nHeader, footer, sidebar and other elements that makes the site structure.\n\nAs I mentioned before, the layouts are built with the help of the components. 
We can code the header using the `.site-logo`, `.site-navigation`, and `.social-icons` classes (from the `components/` folder) and make them work together inside the `_header.scss` file.\n\n### `pages/`\n\nPage specific styles, for example:\n\n- `_home.scss`\n- `_contact-us.scss`\n- `_portfolio-gallery.scss`\n\n### `main.scss`\n\nThe manifesto that include all the files to render the `style.css`.\n"},{"slug":"using-truecrypt-from-the-command-line-in-osx","published":true,"title":"Using TrueCrypt from the command line in OSX","excerpt":"If you *still* love TrueCrypt and like to keep all you workflow inside the command line like me, this small guide is for you.","date":"2015-04-01","status":"Finished","author":{"name":"Juan Olvera"},"ogImage":{"url":"/static/site-feature.jpg"},"changeLog":null,"content":"\nIf you _still_ love TrueCrypt and like to keep all you workflow inside the command line like me, this small guide is for you.\n\n### A comment before the installation\n\nAs you may know, on May 2014, [TrueCrypt](http://truecrypt.sourceforge.net/) developers announced that the the project was discontinued and will no longer receive any updates and fixes. But before this happened, [Kenneth White](https://twitter.com/kennwhite) and [Matthew Green](https://twitter.com/matthew_d_green) planned, crowd-sourced, and executed an [independent full-level](http://istruecryptauditedyet.com/) [security audit](http://blog.cryptographyengineering.com/2015/04/truecrypt-report.html).\n\n**TL;DR:** _\"The NCC audit found no evidence of deliberate backdoors, or any severe design flaws that will make the software insecure in most instances\"_.\n\nThis means that it's OK to keep using TrueCrypt 7.1a. This being said, let’s continue with the installation guide.\n\n### Installation\n\n1. Download the [**TrueCrypt 7.1a Mac OS X.dmg**](https://www.grc.com/misc/truecrypt/truecrypt.htm) file .\n2. Open the `.dmg` file and double click the `.mpkg` installer.\n\n### Yosemite Issue\n\nIf you are using Yosemite you will get this error and the installation blocked.\n\n![A TrueCrypt error when trying to install it without the fix](/static/truecrypt-install-error.png)\n\nFor some reason TrueCrypt thinks 10.10 is less than 10.4.\n\n### Fix\n\n1. Open the `.dmg`.\n2. Copy the content inside to a different location (the `.dmg` file is read only).\n3. Once you got your files inside something like `~/Downloads/TrueCrypt/`, open the `.mpkg` package contents, then the `/Contents` folder.\n4. Open the `distribution.dis` file with a text editor.\n5. Remove lines from 13 to 18.\n\n![Snippet showing the code that you have to remove](/static/truecrypt-error.png)\n\n6. 
Save the file and open the `.mpkg`.\n\n_Sources_:\n\n- [truecrypt 7.1a requires Mac OS X 10.4 or later on Yosemite 10.10](http://apple.stackexchange.com/questions/173879/truecrypt-7-1a-requires-mac-os-x-10-4-or-later-on-yosemite-10-10)\n- [Install TrueCrypt On Mac OS X Yosemite 10.10](https://lazymind.me/2014/10/install-truecrypt-on-mac-osx-yosemite-10-10/)\n\n### Command line setup\n\nAfter you install TrueCrypt you still can't use it from the Terminal, you need to put this in your `~/.bashrc` or `~/.bash_profile`:\n\n```bash\nalias truecrypt='/Applications/TrueCrypt.app/Contents/MacOS/Truecrypt --text'\n```\n\nThen it will be ready to use:\n\n```bash\n$ truecrypt\n\nUsage: Truecrypt [--auto-mount ] [--backup-headers] [--background-task] [-C] [-c] [--create-keyfile] [--delete-token-keyfiles] [-d] [--display-password] [--encryption ] [--explore] [--export-token-keyfile] [--filesystem ] [-f] [--hash ] [-h] [--import-token-keyfiles] [-k ] [-l] [--list-token-keyfiles] [--load-preferences] [--mount] [-m ] [--new-keyfiles ] [--new-password ] [--non-interactive] [-p ] [--protect-hidden ] [--protection-keyfiles ] [--protection-password ] [--random-source ] [--restore-headers] [--save-preferences] [--quick] [--size ] [--slot ] [--test] [-t] [--token-lib ] [-v] [--version] [--volume-properties] [--volume-type ] [Volume path] [Mount point]\n```\n\n### Basic usage:\n\nCreate a volume:\n\n```bash\n$ truecrypt -c -t\n```\n\nMount a volume:\n\n```bash\n$ truecrypt secrets.tc /Volumes/truecrypt1\n```\n\nUnmount specific volume:\n\n```bash\n$ truecrypt -d /Volumes/truecrypt1\n```\n\nUnmount all volumes:\n\n```bash\n$ truecrypt -d\n```\n\nFull documentation and man page:\n\n- [TrueCrypt ArchWiki](https://wiki.archlinux.org/index.php/TrueCrypt)\n\nRelated links:\n\n- [GRC's | TrueCrypt, the final release, archive](https://www.grc.com/misc/truecrypt/truecrypt.htm)\n- [Easy access to TrueCrypt from the command-line in OS X](http://marc-abramowitz.com/archives/2011/10/17/easy-access-to-truecrypt-from-the-command-line-in-os-x/)\n- [How to use TrueCrypt?](http://polishlinux.org/howtos/truecrypt-howto/)\n"}]},"__N_SSG":true}