Why Wikis Suck

Wikis suck because they have (as of yet) never been worth the additional complexity over regular web pages. They have a special syntax that can't quite do everything web pages and links can do. They make some things easier and others harder, the primary example being the distinction between linking to a page within the wiki and a page "external" to the wiki.

Another major problem is that they almost always focus on templating languages rather than complete programming languages. This makes working with data sources (JSON, XML, recent statistics) a pain because the document is dead as soon as it is written. There is no way to express template generation within the language itself.

In order to overcome this, wikis would need to be much more of an ecosystem: they would need the ability to create interactive visualizations that link to historic as well as current data.

At the bottom it should all be compiled into "regular" web pages, so no special software would be necessary for hosting or for interoperating with other static pages.

Rather than collecting words on a page, as in an encyclopedia, people using this kind of tool would collect data transformations, references, aggregations, and higher-order functions that simplify these things.
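To make that concrete, here's a sketch of what a "live" wiki page could look like if pages were small programs rather than static markup. The API and URL are hypothetical; this only illustrates the idea of a page as a data transformation:

fetchJSON = (url) ->
  fetch(url).then (response) -> response.json()

# Turn raw records into markup; a transformation other pages could reuse.
populationTable = (countries) ->
  rows = countries.map ({name, population}) ->
    "<tr><td>#{name}</td><td>#{population}</td></tr>"
  "<table>#{rows.join ""}</table>"

# example.com stands in for whatever data source the page references.
fetchJSON("https://example.com/populations.json").then (data) ->
  document.body.innerHTML = populationTable data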

Daniel X Moore Talks about HyperDev on The New Stack @ Scale Podcast

Daniel X just spoke on a podcast about HyperDev, organizational structure, and agility. Check it out!

The New Stack @ Scale Podcast

“It’s easy to think that developer tools are there to make life easier for developers. It’s actually a lot broader than that. What we found when developing HyperDev is, as the barrier gets lower and lower, more people in the organization, people you might not traditionally think of as developers, are able to contribute, are able to build applications, are able to solve their own problems.” – Daniel X Moore

MIDIChlorian – Accurate Web Based MIDI Player

Have you ever wanted to play a MIDI file on a computer but then found out that there’s no simple way to do it? I know I have! It’s criminal that we don’t have access to the magic of MIDI out of the box on these supposedly super advanced 2016 computers from the future… well no more! I’m going to let you in on a little secret, dear reader: I’ve been toiling away making the world’s best online MIDI player. You can simply drag MIDIs from your computer and drop them into MIDIChlorian (that’s a Star Wars joke for you) and they will play through a beautiful 4MB SoundFont, each track playing its own beautiful notes, timed to the microsecond. Amazing! Well, what are you waiting for? Go there and remember a simpler time, before mp3s, when anybody could remix music and share it among friends. Go enjoy MIDIChlorian.


GitHub Pages Custom Domain with SSL/TLS

The Overview

Route53 -> CloudFront -> github.io

You’ll get the joys of having SSL/TLS on a custom domain https://danielx.net backed by the ease of deployment and reliability of GitHub Pages.

The Price

  • Route 53 ($0.50)
  • CloudFront (pennies!)
  • SSL/TLS Cert (free!)

The Details

Get the certificate for your domain at https://aws.amazon.com/certificate-manager/. Be sure your contact details on the domain are up to date, because Amazon uses whois info to find out where to send the confirmation email. I like to request a certificate for the wildcard as well as the base domain, i.e. *.danielx.net and danielx.net; that way I can use the same certificate if I want to have other CloudFront distributions for subdomains.
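If you'd rather script it than click through the console, roughly the same request via the AWS SDK for JavaScript looks like this. A sketch: it assumes the aws-sdk npm package is installed and your credentials are configured.

AWS = require 'aws-sdk'

# CloudFront only uses certificates from the us-east-1 region.
acm = new AWS.ACM region: "us-east-1"

params =
  DomainName: "danielx.net"
  SubjectAlternativeNames: ["*.danielx.net"]

acm.requestCertificate params, (err, data) ->
  throw err if err
  console.log data.CertificateArn  # keep this ARN for the CloudFront distribution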

You’ll need to click through the links Amazon emails you so that they can validate your ownership of the domain and activate the certificate.

Next, create your CloudFront distribution. Choose “Web”. Configure your origin, in my case strd6.github.io. Choose “HTTPS Only” for the Origin Protocol Policy, so that CloudFront will only connect to your GitHub Pages over HTTPS.

Configure the caching behavior. Here I add OPTIONS to the allowed methods; I’m not sure if this is necessary, since GitHub Pages enables CORS by adding the Access-Control-Allow-Origin: * header to all responses. You may also want to customize the default TTL and set it to zero. GitHub sets a 10 minute caching header on all resources it finds, but won’t set a header on 404s, so a TTL of zero prevents CloudFront from caching a 404 response for 24 hours (yikes!).

Here’s where we add our certificate. Be sure to set up the CNAME field with your domain, and be sure your certificate matches!

You’ll also want to set the Default Root Object to index.html.
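For reference, the slice of distribution settings this post cares about looks roughly like the following as a CoffeeScript object, using the CloudFront API's field names. This is a sketch, not a complete DistributionConfig; an actual createDistribution call requires more fields than shown here.

distributionConfig =
  Aliases:
    Quantity: 1
    Items: ["danielx.net"]
  DefaultRootObject: "index.html"
  Origins:
    Quantity: 1
    Items: [
      Id: "github-pages"
      DomainName: "strd6.github.io"
      CustomOriginConfig:
        OriginProtocolPolicy: "https-only"  # only talk to GitHub Pages over HTTPS
    ]
  DefaultCacheBehavior:
    TargetOriginId: "github-pages"
    AllowedMethods:
      Quantity: 3
      Items: ["GET", "HEAD", "OPTIONS"]
    DefaultTTL: 0  # don't let 404s stick around for 24 hours
  ViewerCertificate:
    ACMCertificateArn: "arn:aws:acm:us-east-1:...:certificate/..."  # the ARN from the step above
    SSLSupportMethod: "sni-only"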

You can also add logging if you’re feeling into it.

If your domain is hosted somewhere else you can transfer your DNS to Route53; alternatively, you can set up the equivalent DNS records with your existing provider.

Create a Route53 hosted zone for your domain, then create an A record. Choose Alias, and select the CloudFront distribution as your Alias Target. Note: you may need to wait ~10-15 minutes for the distribution to juice up.
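The SDK equivalent, if you're scripting it (again a sketch: the hosted zone ID placeholder is yours to fill in, and Z2FDTNDATAQYW2 is, as far as I know, the fixed alias hosted zone ID shared by all CloudFront distributions):

AWS = require 'aws-sdk'
route53 = new AWS.Route53

params =
  HostedZoneId: "YOUR_HOSTED_ZONE_ID"  # placeholder for your zone's ID
  ChangeBatch:
    Changes: [
      Action: "UPSERT"
      ResourceRecordSet:
        Name: "danielx.net."
        Type: "A"
        AliasTarget:
          DNSName: "d1234example.cloudfront.net"  # your distribution's domain name
          HostedZoneId: "Z2FDTNDATAQYW2"          # alias zone ID for all CloudFront distributions
          EvaluateTargetHealth: false
    ]

route53.changeResourceRecordSets params, (err, data) ->
  throw err if err
  console.log data.ChangeInfo.Status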

Caveats

You need to be careful with your URLs (you’re careful with them anyway, right?!). You must include the trailing slash, like https://danielx.net/editor/, because if you instead request https://danielx.net/editor, GitHub will respond with a 301 redirect to your .github.io domain, and it won’t even keep the https!

If you hit a 404, CloudFront may cache the response for up to 24 hours with its default settings. This is because GitHub doesn’t set any caching headers on 404 responses and CloudFront does its default thing.

Hamlet Implementation

There are many existing frameworks and libraries in JavaScript that handle data-binding and application abstractions, but none of them offer an integrated solution that works with higher-level languages (CoffeeScript, Haml). You could come close with CoffeeScript + hamlc + Knockout or something similar, but it would always be a bit of a kludge because they were never designed to work together.

There are three major issues that should be solved in client-side JavaScript applications:

  1. Improved Language (CoffeeScript, or Dart, TypeScript, etc.)
  2. HTML Domain Specific Language (Haml, Jade, Slim, others)
  3. Data-Binding (React, Knockout, others)

Hamlet is novel in that it provides a clean, combined solution to these three issues. By building the compiler on top of CoffeeScript we get to have the same improved language inside and outside of our templates. haml-coffee provides a CoffeeScript-aware HTML DSL, and Knockout.js provides data-binding-aware HTML, but no tool provided them together. What if we could truly have the best of both worlds?

%button(click=@say) Hello
%input(value=@value)

say: ->
  alert @value()
value: Observable "Hamlet"

This simple example demonstrates the power and simplicity of Hamlet. The value in the input field and the model stay in sync thanks to the Observable function. The template runtime is aware that some values may be observable, and when it finds one it sets up the bindings for you.

All of this fits within our < 4k runtime. The way we are able to achieve this is by having a compile step. Programmers accustomed to languages like Haml, Sass, and CoffeeScript (or insert your favorites here) are comfortable with a build step. Even plain JS/HTML developers use a build step for linters, testing, and minification. So, given that most web developers today are using a build step, why not make it do as much of the dirty work as we can?

The Hamlet compiler works together with the Hamlet runtime so that your data-bindings stay up to date automatically. By leveraging the power of the document object itself we can create elements and directly attach the events that are needed to observe changes in our data model. For input elements we can observe changes that would update the model. This is all possible because our compiler generates a simple linear list of instructions such as:

create a node
bind an id attribute to model.id
add a child node
...

As the runtime executes instructions it has the data that should be bound. Because the runtime is “Observable aware” it will automatically attach listeners as needed to keep the attribute or value in sync with the model.
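To illustrate the idea, here is what “Observable aware” attribute binding might look like. This is a toy stand-in, not Hamlet’s actual runtime or API; the Observable here is a minimal made-up version of the real thing:

# Minimal stand-in for an observable: a getter function that
# can notify listeners when its value changes.
Observable = (value) ->
  listeners = []
  self = -> value
  self.set = (newValue) ->
    value = newValue
    listener value for listener in listeners
  self.observe = (listener) -> listeners.push listener
  self

# Bind an attribute; if the value is observable, keep the attribute in sync.
bindAttribute = (element, name, value) ->
  if typeof value is "function" and value.observe
    element.setAttribute name, value()
    value.observe (newValue) -> element.setAttribute name, newValue
  else
    element.setAttribute name, value

# Usage: the attribute tracks the model.
model = id: Observable "greeting"
button = document.createElement "button"
bindAttribute button, "id", model.id
model.id.set "farewell"  # the button's id attribute updates automatically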

Let’s follow the journey of our humble template.

             parser    compiler   browser+runtime
              |          |              |
haml template -> JSON IR -> JS function -> Interactive DOM Elements

The template starts out as a text string. This gets fed into the hamlet-cli which converts it into a JS function. When the JS function is invoked with a model, in the context of the browser and the Hamlet runtime, it produces a Node or DocumentFragment containing interactive data-bound HTML elements. This result may then be inserted into the DOM.

The parser is generated by jison. We use a simple lexer and a fairly readable DSL for the grammar.

There’s no strong reason to choose Haml over Slim or Jade; I just started with it because it was a syntax I knew well. The name Hamlet comes from “Little Haml” as it is a simplified subset of Haml. Adding support for a Jade or Slim style is as easy as creating a new lexer with the appropriate subset of Jade or Slim.

Some of the simplifications to the language come from the power of the runtime to build DOM elements directly. We don’t need to worry about escaping because we’re building DOM elements and not strings. We can also avoid the DOCTYPE stuff and other server-specific requirements that are not relevant in a client-side environment. Other reductions were chosen solely to make the language simpler, which has value in itself.

The core goal of Hamlet is to provide an HTML domain specific language that seamlessly interoperates with CoffeeScript and provides bi-directional data binding. Each piece works together to provide an amazing overall experience. But you don’t have to take my word for it, try it out for yourself with our interactive demos.

Array#minimum and Array#maximum

Time for the next installment in 256 JS Game Extensions. It’s been a while, hasn’t it? Well don’t worry, because here are four new crazy cool additions to the Array class. This brings us up to 40!

# Returns the list of elements that share the maximum value for valueFunction.
Array::maxima = (valueFunction=Function.identity) ->
  @inject([-Infinity, []], (memo, item) ->
    value = valueFunction(item)
    [maxValue, maxItems] = memo

    if value > maxValue
      [value, [item]] # new maximum: start a fresh list
    else if value is maxValue
      [value, maxItems.concat(item)] # tie: add to the current list
    else
      memo
  ).last() # drop the running max value, keep the items

# The first of the maxima.
Array::maximum = (valueFunction) ->
  @maxima(valueFunction).first()

# The minima are the maxima of the negated value function.
Array::minima = (valueFunction=Function.identity) ->
  inverseFn = (x) ->
    -valueFunction(x)

  @maxima(inverseFn)

# The first of the minima.
Array::minimum = (valueFunction) ->
  @minima(valueFunction).first()

Array#maxima is the core of this set; all the other methods are implemented on top of it. maxima returns a list of the elements that have the maximum value for a given value function. The default value function is the identity function, which returns the item itself. This works great for integers or strings: anything that compares correctly with the > operator.

The value function can be overridden; for example, to find the longest word in a list you could pass in (word) -> word.length
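For instance, assuming these extensions are loaded:

["brevity", "is", "the", "soul", "of", "wit"].maximum (word) -> word.length
# => "brevity"

["brevity", "is", "the", "soul", "of", "wit"].minima (word) -> word.length
# => ["is", "of"]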

The special case maximum delegates to maxima and returns only the first result. Similarly, minima delegates to maxima but inverts the value function.

With these methods, many problems that seem complex become quite a lot simpler: pick a value function and decide whether you want to maximize or minimize it.