For some ideas of what to do, this post by Teri Kanefield has a list of concrete actions that you can take: https://terikanefield.com/things-to-do/
Very much appreciated.
Also, what do you mean, OP, by “do you have perfect recall or an average human byte”? Are you thinking of information in terms of bits, and that people can only keep a limited number of things in working memory at a time?
I think that would be a great situation to be in.
You have created a cool thing a lot of people use, by being good at something. You’ve done something.
Also, people have no idea who you are. Nobody is digging through your trash, harassing the people you love, taking pictures of you wherever you go including on your bad hair days, etc. You’re just some guy.
This feels like me wanting to learn Hare because I like rabbits, which I bring up because someone left this reply for me and I think it applies to you too:
That is such a sweet reason! Whimsical decisions like this can be some of the best. Life demands a bit of whimsy every now and then.
Hey, thanks for the suggestion! I was considering firing up a VM just for Hare, but thanks for bringing this option to my attention.
I was going to learn !hare@programming.dev just because it is called “Hare” and I like rabbits, but then I saw that I am not on a supported OS.
Reddit post’s content:
Hello all,
My team and I have been working on this platform for a while now and we wanted to share it for those who might be interested.
The goal of GIGO Dev is to offer a learning platform that addresses all the challenges we encountered when we first learned to code.
The repo consists of all parts of the platform from the lib models to the frontend code. We wanted to open source our platform for people to be able to see how it works, provide feedback on what we can do differently, or even contribute!
We continue to work on it every day and strive to always make it better.
Here is the link to the repo: https://github.com/Gage-Technologies/gigo.dev and here is the link to the actual platform: https://www.gigo.dev/
I also don’t generally use Python as my primary language, but NumPy has pretty good docs in my opinion!
How do I get better at understanding API docs without a tutorial to walk me through the basics of how the library works in the first place? Once I have an idea of some of what the library does and how a few commonly-used functions work I can somewhat handle the rest, but getting to that point in the first place is pretty hard for me if no getting started or tutorial section exists. And so I’m very intimidated by a lot of libraries…
I figured it would look more professional, and it would also keep the contributions I made separate from my more anonymous GitHub; I’m not too sure how closely they investigate your previous contributions and how good your code was.
I am guessing this was not a good choice, and I should have just continued using my more anonymous GitHub, or made the account as JSmith instead of JaneSmith.
Your comment made me curious, so I looked around the website and found this.
Our dataset documents Texas death row inmates executed from 1976, when the Supreme Court reinstated the death penalty, to the present.
On one level, the data is simply a part of a mundane programming book. On another, each row represents immense suffering, lives lost, and in some cases amazing redemption and acceptance. In preparing this dataset, I was deeply moved by a number of the statements and found myself re-evaluating my position on capital punishment. I hope that as we examine the data, you too will contemplate the deeper issues at play.
Just a warning for folks who might not be in a good mental spot to see this in their SQL tutorial right now, or even if it just isn’t to your personal taste. This isn’t your average school exercise with a morbid coat of paint; the site really integrates its data. It provides a lot more information about capital punishment than you strictly need to solve the database problems, which fits nicely with their stated intention that “Exercises should be realistic and substantial”.
Likewise, the exercises here have been designed to introduce increasingly sophisticated SQL techniques while exploring the dataset in ways that people would actually be interested in.
Article text:
Taking baby steps helps us go faster.
Much has been written about this topic, but it comes up so often in pairing that I feel it’s worth repeating.
I’ll illustrate why with an example from a different domain: recording music. As an amateur guitar player, I attempt to make recorded music. Typically, what I do is throw together a skeleton for a song — the basic structure, the chord progressions, melody, and so on — using a single sequenced instrument, like a nice synth patch. That might take me an afternoon for a 5-minute piece of music.
Then I start working out guitar parts — if it’s going to be that style of arrangement — and begin recording them (musos usually call this “tracking”.)
Take a fiddly guitar solo, for example; a 16-bar solo might last 30 seconds at ~120 beats per minute. Easy, you might think, to record in one take. Well, not so much. I’m trying to get the best take possible, because it’s metal and standards are high.
I might record the whole solo in single passes, but it will take me several takes to get one I’m happy with. And even then, I might really like the performance on take #3 in the first 4 bars, really like the last 4 bars of take #6, and be happy with the middle 8 from take #1. I can edit them together (it’s a doddle these days) to make one “super take” that’s a keeper.
Every take costs time: at least 30 seconds if I let my audio workstation software loop over those 16 bars writing a new take each time.
To get the takes I’m happy with, it cost me 6 x 30 seconds (3 minutes).
Now, imagine I recorded those takes in 4-bar sections. Each take would last 7.5 seconds. To get the first 4 bars so I’m happy with them, I would need 3 x 7.5 seconds (22.5 seconds). To get the last 4 bars, 6 x 7.5 seconds (45 seconds), and to get the middle 8, just 15 seconds.
So, recording it in 4-bar sections would cost me 1 minute 22.5 seconds.
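The arithmetic above can be double-checked with a quick sketch. Nothing here is new: the take counts (3, 6, and 1) and the 30-second / 7.5-second figures are the ones from the example.

```python
# A 16-bar solo at ~120 bpm lasts about 30 seconds.
full_take = 30.0           # seconds per full 16-bar take
section_take = full_take / 4  # seconds per 4-bar take (7.5 s)

# Whole-solo approach: 6 full takes before every part is covered.
whole_cost = 6 * full_take
print(whole_cost)  # 180.0 seconds, i.e. 3 minutes

# Sectioned approach, using the take counts from the example:
# first 4 bars good on take 3, last 4 bars on take 6,
# middle 8 bars (two 4-bar sections' worth of material) on take 1.
section_cost = 3 * section_take + 6 * section_take + 2 * section_take
print(section_cost)  # 82.5 seconds, i.e. 1 minute 22.5 seconds
```

Same material, under half the recording time, simply because a failed attempt throws away 7.5 seconds instead of 30.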
Of course, there would be a bit of an overhead to doing smaller takes, but what I tend to find is that — overall — I get the performances I want sooner if I bite off smaller chunks.
A performance purist, of course, would insist that I record the whole thing in one take for every guitar part. And that’s essentially what playing live is. But playing live comes with its own overhead: rehearsal time. When I’m recording takes of guitar parts, I’m essentially also rehearsing them.
The line between rehearsal and performance has been blurred by modern digital recording technology. Having a multitrack studio in my home that I can spend as much time recording in as I want means that I don’t need to be rehearsed to within an inch of my life like we had to be back in the old days when studio time cost real money.
Indeed, the lines between composing, rehearsing, performing, and recording have been completely blurred. And this is much the same as in programming today.
Remember when compilers took ages? Some of us will even remember when compilers ran on big central computers, and you might have to wait 15–30 minutes to find out if your code was syntactically correct (let alone if it worked.)
Those bad old days go some way to explaining the need for much up-front effort in “getting it right”, and fuelled the artificial divide between “designing” and “coding” and “testing” that sadly persists in dev culture today.
The reality now is that I don’t have to go to some computer lab somewhere to book time on a central mainframe, any more than I have to go to a recording studio to book time with their sound engineer. I have unfettered access to the tools, and it costs me very little. So I can experiment. And that’s what programming (and recording music) essentially is, when all’s said and done: an experiment.
Everything we do is an experiment. And experiments can go wrong, so we may have to run them again. And again. And again. Until we get a result we’re happy with.
So biting off small chunks is vital if we’re to make an experimental approach — an iterative approach — work. Because bigger chunks mean longer cycles and longer cycles mean we either have to settle for less — okay, the first four bars aren’t that great, but it’s the least bad take of the 6 we had time for — or we have to spend more time to get enough iterations (movie directors call it “coverage”) to better ensure that we end up with enough of the good stuff.
This is why live performances generally don’t sound as polished as studio performances, and why software built in big chunks tends to take longer and/or not be as good.
In guitar, the more complex and challenging the music, the smaller the steps we should take. I could probably record a blues-rock number in much bigger takes because there’s less to get wrong. Likewise in software, the more there is that can go wrong, the better it is to take baby steps.
It’s basic probability, really. Guessing a 4-digit number is orders of magnitude easier if we can guess one digit at a time.
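A rough sketch of that argument, counting worst-case guesses and assuming the digit-by-digit approach gets feedback after each digit (like a combination lock you can test one dial at a time):

```python
# Worst-case guesses to find a 4-digit number (0000-9999).

all_at_once = 10 ** 4          # try every 4-digit combination: 10000
one_digit_at_a_time = 4 * 10   # try 10 values for each of 4 digits: 40

print(all_at_once, one_digit_at_a_time)
```

Ten thousand attempts versus forty: shorter feedback loops shrink the search space multiplicatively, which is the same reason small takes and small commits pay off.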
Thanks, donated this way!