Programming

Boo? I’m an executive now, but I came into this after many years as a professional programmer. Delphi is a great language. I still use it from time to time. Its newest owner, Embarcadero Software, continues to support it and releases new versions regularly. However, they’ve abandoned the hobbyist market, and the newer versions cost an arm and a leg.

6 Likes

Not necessarily a bad thing. Object Pascal is very straightforward, and the IDE is a delight to work with. Delphi is still available from https://www.embarcadero.com/, and it’s kept up to date. There is a free version without networking and database components. However, there is also an open-source knockoff, Lazarus, that is quite compatible, also kept up to date, and (naturally enough) has no such restrictions.

Edit:

There is a 32-bit Starter Edition that is free.

5 Likes

I think ultimately that was what I ran into. Coupled with the fact that, out of all the languages I dabble in, that had been the only Delphi app I’d ever felt the need to “fix”, it just wasn’t worth it to me for the project, which I believe I ultimately abandoned.

4 Likes

Hey, computers are just collections of high/low electrical signals. It’s all the same thing, right?

About 10 years ago, I would have been right there with you.

Nowadays, since my primary job isn’t programming, my requirements are: (for small programs with lots of text processing) fast to write, fast to run, fast to debug. Which, for me, ends up being Perl (written in a C/C++ programmer’s style). And you thought you had to duck…

(There are other languages I still keep my hand in, but when I have to parse through a ton of CSV(or similar)-formatted text and rebuild it into something usable in a one-shot task, I tend to reach for the ol’ familiar…)
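Perl’s the tool here, but the shape of the job translates. A minimal sketch of that kind of one-shot CSV rebuild, in Python for illustration (the data and column names are invented):

```python
import csv
import io

# Hypothetical one-shot task: collapse a CSV of (user, minutes) rows
# into per-user totals -- the throwaway munging described above.
raw = """user,minutes
alice,30
bob,45
alice,15
"""

totals = {}
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["user"]] = totals.get(row["user"], 0) + int(row["minutes"])

for user in sorted(totals):
    print(f"{user},{totals[user]}")
```

Same spirit as the Perl one-liner: read, reshape, dump, and throw the script away.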

8 Likes

I started with Apple II and C64 BASIC, then QBASIC, like a lot of people did back in the day, then graduated to Turbo Pascal (OOP was sooo cool!), and from there to a largely forgotten language called Euphoria that was interpreted and flexibly typed, but you could inline machine code.

We had at the time a community working on stuff that in hindsight could’ve been huge, but that was before FOSS became a thing. We shared code on a mailing list and then a website, but the language was shareware. I did some stuff with an IDE, VESA VBE graphics code, and making DOS programs compatible with this new hot thing called Windows. Even made a couple of small Windows programs for playing RPGs. But by then FOSS languages like Perl and Python were out and destroying the market for shareware languages like that.

Along the way I learned some C, C++, Java, etc., all with the goal of making real Windows programs. I had no real problems with the languages but oh man the tooling needed to do anything, even fairly trivial stuff, was so excessive and complicated. (Trying to learn C++ and Win32 API and WinForms and Borland’s OWL wrappers and their IDE and build chain all at once was too much.)

When I went to actually doing it for a career instead of just fun, I ended up doing web stuff instead of desktop. I picked up PHP, SQL, JavaScript, Perl, and Python, and I’ve been doing that since (though no Perl in years). I’ve done some C and Java at work, but not much. On the side, I played around with Forth, F#, and Prolog, just to expose my brain to different paradigms, but haven’t done anything with them.

The things I really love most about programming are knocking out simple scripts that automate away a lot of boring, tedious, repetitive work so that I (or coworkers) can focus on doing creative and productive stuff instead of copy/pasting, retyping, and tediously clicking around. Also when customers say that the stuff I did makes their jobs so much easier or more fulfilling, and when salespeople say the potential customers loved the stuff they demoed.

I like clarity, so Python and Pascal/Delphi are beautiful, but I don’t see Pascal at all at work, and Python only very rarely.

Things I dislike are how much bloat and complexity have crept in over the years. We used to aim for as clear a mapping as possible between the problems we were trying to solve and the data and code involved, but now there are 72 layers of indirection and abstraction in between. We literally have to have tools to manage the tools that manage the tools that manage that complexity for us.

10 Likes

The thing about marginal languages is that, if they had fans at all, they never really went away; they got open-sourced instead. For instance, this might interest you, given your history. In the past, I was interested (from a design point of view) in languages and language frameworks like BlackBox (Component Pascal, an offshoot of Oberon with a strong OpenDoc influence) and Q’Nial, an array-manipulation language along the lines of APL but with borrowings from other paradigms, and they have transitioned, as you can see, from proprietary to FOSS. Q’Nial was never a roaring success, so if it still lives, chances are that more popular hobbyist languages still exist as well.

5 Likes

This.

I like writing code that does something.

One of our new projects has libraries that are not allowed to know about other libraries, despite there being no possible use case in which they should be separated. Every time I need to add new functionality to something accessible from the UI, I have to second-guess the project lead’s vision of where everything belongs, write the code, create new data types to represent the same data as types that already exist, and write at least two wrapper functions. In a lot of ways, I prefer our shitty, hopelessly entangled legacy code. The new stuff is already hard to debug and maintain, and it’s not going to go into production for another 2–3 years by my estimate.

10 Likes

I don’t code directly on applications that much any more. Lately, my role has been a lot more saying “Hmmm, could we make this simpler by doing X/Y/Z. I don’t think you want to be in charge of maintaining the code you’ve got planned” and “I think you should plan this with So-and-So, as their work will inherently be dependent on yours, and if you don’t talk about it now, they’ll waste a ton of effort.”

I feel like it’s so easy to get caught up in “it wasn’t invented here”, but academia does not reward development well. They reward papers and grants, so you need to make code that is maintainable, and not spend more effort than needed doing so. I’m often the only woman in the room, and since women often have to get more done to be judged equally productive, I often have a sort of unique ability to spot those areas where we’re putting in too much effort for too little pay off.

8 Likes

And right there is the very essence of engineering.

12 Likes

I had never seen this before but it’s quite amazing. I’ve painfully adapted to git’s many quirks (although I’m hardly an expert) but to step back and really look at it with an objective eye makes one think, “what the fuck were they thinking?”

http://stevelosh.com/blog/2013/04/git-koans/

5 Likes

On further reflection, this is often me:

And this is a typical commit history for me:

Once again proving there’s an xkcd for basically everything.

9 Likes

“Take CVS as an example of what not to do; if in doubt, make the exact opposite decision.”

5 Likes

The history of and reasons for the creation of git are fascinating to me:

5 Likes

same.

ive developed software for 20+ years, plus or minus hobbying, +/- working (still occasionally) in games.

sometimes it’s like i imagine a fish must feel: swimming in water. sometimes i see it’s the least thing (games especially) i could be doing.

i wish every day that i could be making my own useless bullshit. my choices seem to remain: make useless (entertainment) for somebody else, try your own thing in replication of all that, or dither.

i sincerely wish it were easier to dither in this u.s.a. i live in.

we could all make such brilliant useless shit if it were easier to put food on the table.

9 Likes

What I find funny is that after having used git a bit outside of work, we all really wanted to change to git because it would make branching and merging easier than SVN. But now there are so many possible combinations of pulls, fetches, fast forwards, merge commits, etc. with various command options, that I can’t help thinking SVN made this so much easier. :laughing:
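For what it’s worth, some of that confusion melts away once you remember that `git pull` is just `git fetch` followed by a merge. A throwaway demo in a scratch directory (all repo and commit names invented):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"

# An "upstream" repo with one commit.
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m one

# Clone it, then add a second commit upstream.
git clone -q upstream clone
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m two

# "pull" done by hand: fetch updates FETCH_HEAD, then a fast-forward merge
# moves the local branch up to it.
cd clone
git fetch -q origin
git merge -q --ff-only FETCH_HEAD
git rev-list --count HEAD    # prints 2: both commits are now local
```

The proliferation of options (rebase vs. merge, fast-forward-only, etc.) is really just different answers to “what should the merge step do?”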

9 Likes

Probably going to submit an R package to a standardized package repository in the next couple days, when I have continuous integration working. I am really, really nervous, even though all my checks and tests pass.

6 Likes

are any of the books in this humble bundle worth reading?


(Yes, I still use Java.)

Looks like a mix of books covering different versions of Java, so already I’m skeptical.

4 Likes

I’ve done some ASCII art code diagrams in my day, but these are just spectacular.

https://blog.regehr.org/archives/1653

3 Likes

The project I’m working on began over a decade ago. The number of customers and the quantity of data were relatively small, and future directions were relatively unknown back then. So the data structure was made very flexible. A lot of work was done in code rather than the database, to keep things flexible. The reporting bits worked well enough with small datasets.

Fast-forward, we now have well over a hundred customers, some of which have massive amounts of data. The reports don’t work well anymore. We have loops within loops, each doing queries, multiplying out to thousands of queries per page load for some customers. And the queries themselves are doing self-joins and other very slow stuff to handle all the flexible dynamic data structures.

So I’ve spent much of the last week looking at a report that looks like it should be a single aggregate query, trying to factor out all the layers of cartesian products in both the code (loops within loops) and the database (millions × thousands joins). A couple days ago I hit a point where I decided I’d gotten as far as I could go; any further would be madness. Got at least one order of magnitude improvement on the datasets I was looking at.
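In miniature, the refactor looks like this (schema entirely invented, sqlite3 just to make it runnable): replace the per-row query loop with one grouped aggregate.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO customers VALUES (1, 'acme'), (2, 'globex');
    INSERT INTO orders VALUES (1, 10), (1, 20), (2, 5);
""")

# Slow shape: one query per customer -- thousands of round trips at scale.
slow = {}
for cid, name in con.execute("SELECT id, name FROM customers"):
    (total,) = con.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE customer_id = ?",
        (cid,),
    ).fetchone()
    slow[name] = total

# Fast shape: one aggregate query, grouped in the database.
fast = dict(con.execute("""
    SELECT c.name, COALESCE(SUM(o.amount), 0)
    FROM customers c LEFT JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
"""))

assert slow == fast  # same answer, one query instead of N+1
```

The real report is far messier than this, of course, but every layer peeled off is one fewer multiplier on the query count.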

But then yesterday I tested against a larger set of customer data, and while it looked good on most, it would’ve completely broken on one site (due to the whole data-flexibility thing), and on a couple of others it had no noticeable effect. :persevere:

So today I tried another approach. I created new ‘helper’ tables with direct relations (consolidating the indirect ones) and managed to rewrite the thing as a total of only 2 queries. (One needed to be separate because for some reason the data that it feeds into the report isn’t actually supposed to line up with the rest.) I finally got my aggregates to work and there’s not a single loop in the code or cartesian product in the queries! :crossed_fingers:

Now the question is whether those helper tables should be temp tables generated at the start of the report with the then-current data (which adds some time to every hit on the report, but is still much faster than before), or instantiated and ready for it. If instantiated, that would mean making sure everything in the entire codebase always updates them whenever something changes. Instantiated would of course be faster, but the risk of missing some possible way the data could change in that big a codebase makes me lean toward temp tables. It’s not something we use anywhere else, but my previous experiences with views and instantiated denormalized tables (and all the bugfixes we’ve had to do afterward) make me think it’s the safer route.
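A sketch of the temp-table variant (table and column names invented; sqlite3 here only to make it runnable, but MySQL’s `CREATE TEMPORARY TABLE` follows the same pattern): build the direct-relation helper at the start of the report run, join against it, and let it vanish with the session.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Hypothetical indirect hierarchy: item -> group -> site
    CREATE TABLE groups (id INTEGER PRIMARY KEY, site_id INTEGER);
    CREATE TABLE items (id INTEGER PRIMARY KEY, group_id INTEGER, value REAL);
    INSERT INTO groups VALUES (1, 100), (2, 100), (3, 200);
    INSERT INTO items VALUES (1, 1, 5), (2, 2, 7), (3, 3, 9);
""")

# Built fresh at the start of each report run, so it can never go stale;
# it disappears when the connection closes.
con.executescript("""
    CREATE TEMP TABLE item_site AS
    SELECT i.id AS item_id, g.site_id
    FROM items i JOIN groups g ON g.id = i.group_id;
""")

# The report itself now gets a direct join instead of walking the hierarchy.
report = dict(con.execute("""
    SELECT s.site_id, SUM(i.value)
    FROM item_site s JOIN items i ON i.id = s.item_id
    GROUP BY s.site_id
"""))
```

The trade-off is exactly the one described above: a bounded, predictable cost per report run, in exchange for zero risk that some forgotten code path leaves an instantiated table stale.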

Any thoughts on temp tables vs instantiated relation tables (to short-circuit hierarchies and indirect relations and enable direct joins) in a case like that? (Assume MySQL views are out because we’ve had too much trouble with them before.)

7 Likes