• For emacs lovers: I extended @dgold’s code to post to micro.blog from emacs. Now you can include images in the markdown buffer to be posted to micro.blog, and they will be uploaded and linked properly in your published post!

And here’s the code (or, alternatively, see gist):

(require 'request)
(require 'json)          ;; for `json-encode'
(require 'markdown-mode) ;; for `markdown-regex-link-inline'

(setq mb-emacs-app-token "123456789") ;; create this in account -> app tokens -> edit apps
(setq mb-micropub-endpoint "https://micro.blog/micropub")

(defun mb-get-media-endpoint ()
  "Query the Micropub config and return the media endpoint URL."
  (cdr (assoc "media-endpoint"
              (let (result)
                (request
                  mb-micropub-endpoint
                  :params '(("q" . "config"))
                  :type "GET"
                  :sync t
                  :timeout 10
                  :parser 'json-read ;; parse the JSON response into an alist
                  :headers `(("Content-Type" . "application/json")
                             ("Authorization" . ,(format "Bearer %s" mb-emacs-app-token)))
                  :complete (cl-function
                             (lambda (&key data &allow-other-keys)
                               (setq result data))))
                (if result
                    result
                  (error "Can't get media endpoint")))
              ;; `string=' also accepts symbols, so this matches the
              ;; symbol keys produced by `json-read'.
              #'string=)))

(defun mb-upload-image (media-endpoint img-path)
  "Upload the image at IMG-PATH to MEDIA-ENDPOINT and return its published URL."
  (cdr (assoc "url"
              (let (result)
                (request
                  media-endpoint
                  :type "POST"
                  ;; No explicit multipart Content-Type header here:
                  ;; `request' builds it (boundary included) when :files is used.
                  :files `(("file" . ,img-path))
                  :headers `(("Authorization" . ,(format "Bearer %s" mb-emacs-app-token)))
                  :parser 'json-read
                  :sync t
                  :success (cl-function
                            (lambda (&key data &allow-other-keys)
                              (setq result data))))
                (if result
                    result
                  (error "Can't upload image")))
              #'string=)))

(defun mb-upload-inline-images ()
  "Post inline images: upload every local image linked in the buffer
and rewrite each link to point at its uploaded URL."
  (interactive)
  (save-excursion
    (save-restriction
      (let ((media-endpoint (mb-get-media-endpoint)))
        (widen)
        (goto-char (point-min))
        ;; `markdown-regex-link-inline' matches "![alt](file)";
        ;; group 1 is the image marker "!", group 6 the link target.
        (while (re-search-forward markdown-regex-link-inline nil t)
          (let ((imagep (match-beginning 1))
                (file (match-string-no-properties 6)))
            (when (and imagep
                       (not (zerop (length file)))
                       (file-exists-p file))
              (let* ((abspath (if (file-name-absolute-p file)
                                  file
                                (concat default-directory file)))
                     (img-upload-url (mb-upload-image media-endpoint abspath)))
                (replace-match img-upload-url t t nil 6)))))))))

(defun mb-post-buffer ()
  "Post current buffer to micro.blog (possibly as draft)."
  (interactive)
  (when (yes-or-no-p "Are you sure you want to post this? ")
    (save-restriction
      (widen)
      (let ((buffer-contents (buffer-substring-no-properties (point-min) (point-max)))
            (mb-post-name (read-string "Enter post name (leave empty if none): "))
            (mb-post-status `(post-status . [,(if (yes-or-no-p "Post as draft? ")
                                                  "draft"
                                                "published")])))
        ;; Copy the content of the current buffer into a scratch buffer,
        ;; so the image links can be rewritten without touching the original.
        (with-current-buffer (generate-new-buffer "post2mb")
          (insert buffer-contents)
          (goto-char (point-min))
          (mb-upload-inline-images)
          (request
            mb-micropub-endpoint
            :type "POST"
            :data (json-encode
                   `((type . ["h-entry"])
                     (properties
                      (content . [,(buffer-substring-no-properties (point-min) (point-max))])
                      (name . [,mb-post-name])
                      ,mb-post-status)))
            :headers `(("Content-Type" . "application/json")
                       ("Authorization" . ,(format "Bearer %s" mb-emacs-app-token)))
            :success (cl-function
                      (lambda (&key data &allow-other-keys)
                        (message "Success.")))))))))
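The same Micropub exchange is easy to reproduce outside Emacs. Here is a minimal sketch in Python; the payload shape mirrors the elisp above, while the function names and the `requests` dependency are my own additions (not tested against the live API):

```python
import json

# Same endpoint as the elisp variable `mb-micropub-endpoint'.
MICROPUB_ENDPOINT = "https://micro.blog/micropub"

def make_post_payload(content, name="", draft=False):
    """Build the JSON body for a Micropub h-entry (mirrors the elisp payload)."""
    return {
        "type": ["h-entry"],
        "properties": {
            "content": [content],
            "name": [name],
            "post-status": ["draft" if draft else "published"],
        },
    }

def post_entry(token, content, name="", draft=False):
    """POST the entry. Requires the third-party `requests` package."""
    import requests  # assumption: requests is installed
    resp = requests.post(
        MICROPUB_ENDPOINT,
        data=json.dumps(make_post_payload(content, name, draft)),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp
```

The token is the same app token you create under account -> app tokens.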


• ## Stay away from AutoML (...if you can't do ML)

Image taken from this post on reddit.

These are thoughts in response to an article on dataiku about empowering more people in an organization to use ML.

Often there just aren’t enough data scientists to support the needs of an entire organization. The typical recommendation - and trend - to alleviate this problem is to put effort into setting up self-service analytics and automatic model creation with AutoML.

While I do think self-service analytics have a big role to play, I am not at all convinced about AutoML. What’s that?

To quote from my third Google result for “AutoML”:

> AutoML provides methods and processes to make Machine Learning available for non-Machine Learning experts, to improve efficiency of Machine Learning and to accelerate research on Machine Learning. Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform manual tasks. As the complexity of these tasks is often beyond non-ML-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML.

That is, people with no data science expertise should be able to roll out their own automatically generated Machine Learning model, to support decisions in what the dataiku article would categorize as “simpler projects”.

## Why AutoML is not the solution

I am extremely skeptical of using AutoML when you don’t have expertise in Machine Learning. With the right data, and the right problem? Sure, you might get a decent model out. The issue is that you need training to be able to tell what the right data is, what a decent model is, and even how to define the problem correctly in the first place!

Otherwise you better expect garbage.

It is so very easy to fool yourself into a data-backed story that crumbles as soon as you start poking at it, and you need a lot of training to avoid fooling yourself (as an aside, this seems to be a general rule for life). With this in mind, when you are not trained to reason about data or ML, AutoML is just a recipe for disaster.
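As a toy illustration of this self-deception, in Python (pure standard library, entirely made-up data): generate random labels, "select" the best of a thousand coin-flip predictors, and the winner looks well above chance on the data it was selected on, only to fall back toward chance on fresh data.

```python
import random

random.seed(0)  # fixed seed for reproducibility

N = 50    # training examples
M = 1000  # candidate "models" (here: pure coin-flip predictors)

labels = [random.randint(0, 1) for _ in range(N)]  # random target
candidates = [[random.randint(0, 1) for _ in range(N)] for _ in range(M)]

def accuracy(pred, truth):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

# "Model selection": keep whichever candidate scores best on the data we have.
best = max(candidates, key=lambda c: accuracy(c, labels))
best_train_acc = accuracy(best, labels)

# Fresh data: score the same "winning model" against new random labels.
fresh_labels = [random.randint(0, 1) for _ in range(N)]
test_acc = accuracy(best, fresh_labels)

print(f"in-sample: {best_train_acc:.0%}, fresh data: {test_acc:.0%}")
```

The selected predictor contains no signal at all; only the selection process manufactures its in-sample score. Spotting this class of mistake is exactly the training that AutoML cannot replace.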

It doesn’t really matter whether the idea is to apply it only to “simpler projects”. Decisions based on garbage are still garbage, regardless of project complexity.

## Train your people, make them effective

Don’t get me wrong, empowering more people to play with data in an organization is a brilliant idea. But you have to train your people first!

You can take away the software-engineering pains of data science, and even a lot of the Machine Learning boilerplate; in that, tools are extremely useful. But you can’t take away the data science itself. You need people who can reason about problems, data, and models. No amount of AutoML will help if you don’t have that.

If you have to invest in something to make your organization data-driven, invest in your people. Invest in training. Don’t try to substitute tools for expertise.

• ## Linux embedded development stays a second-class citizen

Arduino debugging can be a pain: you need a (quite expensive) specialized piece of hardware, which you then have to hook up to the board in just the right way before you can get going.

Now Microsoft has introduced a very nifty feature in the Arduino extension for VSCode, which lets you debug some Arduino boards without any additional hardware. Just plug the board into a USB port, set some breakpoints, and step through your code like it’s nobody’s business. That’s great news if you happen to be using one of the supported boards, which I am. No more print statements - sounds like heaven!

So I set the environment up in VSCode, followed the steps, started the debugging session and… Error. Crap. I must have misconfigured something. That’s fine, let’s google it.

And, lo and behold, nope. No misconfiguration. The feature is broken on Linux; in fact, it seems that any debugging on Arduino boards (with or without a Segger probe) just won’t work, because of a small VSCode bug that was reported back in 2017.

I wish this were a fluke. Unfortunately, embedded development on Linux keeps lagging behind, and I keep having to switch back and forth between OSes depending on which part of development I’m at.