Uncategorized

From Java to Objective-C

I remember in 2001, when I started to learn Java, how awesome it was compared to C/C++.  No more malloc, no more free, no more pointers, and lots of easy-to-use Collections.  Plus built-in GUI tools, e.g. AWT (remember that!) and Applets (*shudder*). Well, lately it’s been a blast learning Objective-C, which is C but objectified :-) For those Java programmers out there who want to ramp up quickly, here’s a quick translation guide – it doesn’t explain C pointers and the like, but you should get a good idea of the significant syntactic/stylistic differences between the two languages. https://docs.google.com/document/d/1iRv-8qQxPlMVKLgHPGbkgK4QeruHJ7tqfkEYJxXmd2o/edit?usp=sharing I am sure I have more to add. Please let me know if you find any problems or misstatements.
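To give a flavor of the kind of mapping the guide covers, here is a tiny, hypothetical side-by-side: a few lines of everyday Java, with the rough Objective-C equivalents shown as comments (the class and values are made up purely for illustration).

import java.util.ArrayList;
import java.util.List;

public class GreetingDemo {
    public static void main(String[] args) {
        // Java: dot-notation method calls on a typed collection
        List<String> names = new ArrayList<>();
        names.add("Ada");

        // Objective-C uses bracketed "message sends" instead, roughly:
        //   NSMutableArray *names = [NSMutableArray array];
        //   [names addObject:@"Ada"];

        // Java string concatenation and printing
        System.out.println("Hello, " + names.get(0));

        // Objective-C equivalent, roughly:
        //   NSLog(@"Hello, %@", [names objectAtIndex:0]);
    }
}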

Standard
Uncategorized

Creating a Free Computer Science Degree

Everyone who wants to go to college and can afford to should do so. But what about those who want to, but simply cannot afford to? Yes, college pays off over the long term (even at today’s exorbitant rates in the USA). But some families just see the cost side of the equation.  The decision, once clear-cut, is becoming a bit more nuanced. For example, read:

150 of 3500 US Colleges worth the investment

Is College worth it?

MOOCs (Massive Open Online Courses) are a very interesting recent phenomenon – the idea that you could put college-level courses online and make them free is truly amazing.  But the high drop-out and poor attendance rates for MOOCs show that something important is missing from your average MOOC.

And I think the thing that is missing is the “social” aspect of schooling.  When you go to college you make a public commitment to education – at least to your friends and family – that says “I am going to do this”.  Then you get to class, make friends, and you have an incentive to stay.  You see your friends get on, maybe see them and others succeed with higher scores, and now a competition-oriented incentive takes hold.

I think the “Social” incentives and support, if they could be captured and ignited, would be a fascinating enabler for MOOCs.

Last year I also saw this article “$200K for a computer science degree? Or these free online classes?” that provided a listing of a number of college level courses from the likes of Stanford, MIT and Princeton that could form the basis of a Computer Science education.

And so my creative side (which doesn’t get out much) got to thinking. What would it take to have a space, with WiFi and a collaborative work environment, in which people could take these courses with mentorship and guidance from professional software engineers and perhaps some college educators looking to “give back”?  Not much, I think.

If you pair with local software companies (desperate for software engineers of all stripes) you could put together a challenging but realistic education to help propel these kids to a good future.

A key advantage of this “free education” would be the freedom to sculpt a syllabus that is more personalized than the traditional CS degree. Some folks I know from my CS class, and from my later 20 years of experience, are suited to different tracks – not due to inherent ability or intelligence, but as a result of what they are genuinely interested in.  Look, some folks just want to get a better job. Others are more committed to a large up-front educational investment (of time).

In addition, we need to recognize that attending college from 18-22 and then retiring at 65 without further educational investment in between is a 20th century concept whose usefulness has passed. We need an educational model that looks more like a “continuous improvement” model – maybe a few courses to bootstrap, then a course or two every year, year-after-year.  More like Scrum – less Waterfall.

When I graduated college the key skills were Unix, C, RDBMSes and CORBA, and we were hot about neural networks and 64 kbps ISDN lines.

Today we have Internet technologies, Objective-C, Java, C#, a plethora of NoSQL technologies, the cloud, mobile, etc.

Pretty much I’d say your technology skills need a near complete revamping every 3-5 years. True, the principles don’t change that much, but the tools, technologies in use and the problems being solved definitely do.

A Computer Science education has milestones but is NEVER done!

So what syllabus would I pick? Well I think the InfoWorld article is a good start but I would add

Hardware Classes

Computer Architecture  @ Princeton

Computer Architecture @ Saylor.org

The basics of microprocessors are critical to making this a true CS education. Being hands-on is a challenge, but maybe there are opportunities with local “maker” communities.

And that’s probably a minimum. A course in electronics and digital logic design would be a good addition.

Math & Statistics

I can hear the groaning, but you won’t get far in technology without understanding basic stats (averages, standard deviations, medians, probability distributions, etc.).  Here are some good places to start.

Introduction to Statistics @ Berkeley (EdX)

Introduction to Statistics @ Stanford (Coursera)

Advanced Level: Computing for Data Analysis @ Johns Hopkins (Coursera)

Everyone should have statistics – but for the feel of a “true” CS degree you’ll want a bunch of work on calculus, discrete math and geometry – they are critical in advanced areas like image processing, cryptography, computer graphics, etc.

Internship

Software development and computer programming are a craft and I think a healthy smattering of hands-on practical exposure in a business environment is critical. It will need to be done for a large chunk of time (3-6 months at a time) and will help to ground the student and focus them on how to hone their craft.

In addition, project work with peers is another great way to get this much-needed practice time.

Mobile & Internet Technologies

In any CS degree, basic principles and math are critical, as is core systems knowledge. But the kids should get a flavor of the “cool” technologies too.

Mobile: Objective-C / iOS   or Java/Android

Web Development: HTML/CSS, Javascript and perhaps some Ruby/Rails (to show people how a dynamic language can make you more efficient)

Not to mention UX design.

So much to learn – but it doesn’t have to be all at once. Get enough to get a solid SWE / Web Developer job and then continue to learn – one course at a time, perhaps one each semester – that should be enough.

The Challenge

Could we take MOOCs and pair them with local SWEs and college educators and provide a solid CS education at a very low price?  I think the answer is yes. You don’t need the massive classrooms. The stadiums. The dorm rooms. The cafeterias. You can probably figure out something around textbooks too.  Yes, you need a space. Yes, you need WiFi. You can “employ” some of your better students as mentors too and have them give back before they leave.

I choose the analogy of the “Model T”, where Ford created a car affordable to the middle classes (all while paying his workers well above average) and helped create a revolution in manufacturing.

In the end, it is the result that proves the model – if you can get these kids hired into good Web Developer and Software Engineer jobs at good companies with near-typical salaries and benefits, what more would you need?

Standard
AWS, software architecture

AWS Migration Patterns

A lot of enterprises are getting great value out of migrating their applications to the Cloud. 
In particular, AWS is the “Big Dog” in this space – although Microsoft, Google and IBM are out there, I personally believe they are far behind (by roughly 2-3 years in terms of pure depth and breadth of functionality), and that shows up in the revenue too.
 
AWS customers are saving money mostly because you can rent what you need vs. buy for the peak traffic. They are also gaining not only increased scalability (e.g. auto-scaling) but increased flexibility (e.g. EC2 sizing options, deployment regions), reduced response times and reduced development times (from pre-built, pre-configured components like SQS, SNS, SES etc.).
 
However, anyone who has spent any time in software or IT shivers at the word “migration”. Add to that the relative unknown of migrating to a data center and IT staff that you rent and don’t own or control, and it can be fear-inducing for the uninitiated.
 
Having recently completed a “Big Bang” migration to AWS, I learned a lot of lessons that I would like to share, and I have also learned a lot about potential approaches to migration. This article describes a few of them.
 
First off – What are we migrating?
For the purposes of this article I am going to assume you have a typical three tier Web/Mobile application – Web/Client Tier, Application Tier and Data Tier. For Java that could be Struts/JSP web tier & EJB in a J2EE container (Web & Application tiers) or alternatively a mobile app with REST services.  Either one backed by a traditional RDBMS (e.g. Oracle / MySQL).  If you are thinking of going the NoSQL route at the same time I would suggest you do the AWS migration first, because once you go to AWS your setup, profiling and tuning of your NoSQL implementation may need to be done again.
 
A typical “in house” 3-tier web architecture

STEP 1: Learn, learn and learn
Your first step is always to learn as much about AWS as possible and learn about the architectural and design options available.  You have EC2 (your virtual machine instances), SQS (for queueing), SNS (for notifications), ELB (Elastic Load Balancing, for load balancing), Route 53 (for DNS), RDS (Relational Database Service, for your RDBMS) and SES (Simple Email Service). There are plenty more services out there, but for the sake of this (relatively) basic 3-tier architecture those key services will get you a long way. Giving your tech staff an intensive training course (delivered by Amazon) on site is a great idea to get everyone to the same knowledge level FAST!
 
STEP 2: Pick and choose your services
Second step: your architecture and security teams are needed next to make sure your target architecture handles Disaster Recovery / High Availability and Security.  It’s all pretty much the same rules as you had before, just wrapped a bit differently (e.g. failover detection in your load balancer, firewall rules for which ports are allowed or not), but you’ve also got the added “features” and flexibility (complexity!) of Availability Zones and Regions.  Realize that AWS is always in motion – adding features, tweaking, etc. – so you’ll need to be a bit comfortable with learning as you go.  On the security side, I don’t think anything beats having a “Red Team” whose goal in life is to hack your system.  So start simple before the migration and add new cool services later, post-switchover. Don’t try to migrate AND add a lot of new services at the same time.
 
Finally, once you feel you know enough about AWS and have a target architecture, comes the fun part: migrating all the individual pieces.  There are really two basic ways to do it – all at once or bit by bit.
 
PATTERN #1: “Big Bang” switchover.
 
One way to do a migration is as follows – the “Big Bang”
 
Basically you:
1) Migrate your domain’s DNS records (e.g. http://www.mycompany.com) to be managed by Route 53 – still having the records point to your old data center.
2) Build out and test your new target architecture in parallel, migrating all data as required. Test some more! And test some more!
3) On some day, at some hour, cut over the http://www.mycompany.com DNS records to point to your AWS load balancer. Smoke test, have some users test, and either it’s good (and you’re OK!) or it’s not (and you fail back).
 
 
Parallel Architectures before the “Big Bang” switchover
 
The pros: this is relatively simple sequencing for management and for developers – very waterfall – and it’s relatively simple to test. The downside: it’s risky, as all “big bangs” are.
 
The risks and problems become clear if you have a 24x7x365 mission-critical system – especially one where you can’t just tell your users to log off at a certain time.  Similarly, a high-visibility or large revenue-generating system (even if it’s not 24×7) might not be a candidate for a big bang approach – since you may find some major migration issues hours, days or weeks after switchover, without the ability to switch back (easily).
 
So what’s the alternative to the “Big Bang”? Well, clearly it’s piecemeal. You could migrate small components of your architecture one by one, or you could operate two architectures in parallel with some data synchronization.
 
I don’t recommend the latter – in my experience Data synchronization systems are some of the hardest to get right all the time – especially when networks are so flaky.
 
So what does that leave us?
 
PATTERN #2: Do it in steps via a Hybrid architecture
 
1) Migrate small, well-defined components first. A good example: if you are using JMS, switch to SQS. If you send emails, switch to SES. That means all of the rest of your application (Web Tier, App Tier, Data Tier) remains as-is for the moment – but you are calling these services remotely.   This is a good first foray – it gets your Ops and Security teams used to IAM roles, and you will learn things about Regions and Availability Zones without going all in.  Even these small changes might require some architecture rethink since, even within AWS data centers, calls to SQS are NOT fast the way, say, a local ActiveMQ broker is (mostly because SQS stores multiple redundant copies of every message).
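As a minimal sketch of what that first SQS step might look like in Java (using the AWS SDK for Java; the region, queue name and message payload below are made up for illustration):

import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SqsSmokeTest {
    public static void main(String[] args) {
        // Hypothetical region and queue name, purely for illustration.
        AmazonSQS sqs = AmazonSQSClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        String queueUrl = sqs.createQueue("order-events").getQueueUrl();

        // Where the app used to call its JMS producer, it now calls SQS over HTTP(S).
        sqs.sendMessage(new SendMessageRequest(queueUrl, "{\"orderId\":42}"));

        // Consumers poll rather than getting JMS callbacks; note the extra network
        // latency compared to a broker like ActiveMQ running locally.
        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            System.out.println("Received: " + m.getBody());
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}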
 
This is a nice piece of work where you’ll learn a lot without mortgaging the farm.
In addition you’ll be learning more about pricing, tiers and your REAL billing (which can sometimes come as a surprise!).
 
2)  From there you have to get some of the rest of your Architecture over.  
One good option exists if you have some API calls for which you don’t have a tight SLA (e.g. you don’t mind slipping from a 50 ms response time to 1 second) or you don’t mind the data being a bit stale (say by minutes or hours). In that case you might want the following:
 
i) Route 53 migration of your DNS records, as before.
ii) Set up either MySQL replication from your data center to a slave on AWS, or perhaps a basic nightly dump-and-load from your production Oracle.
iii) Use Route 53 to route some large percentage of the requests to your “real” system (back in your old data center). This is done via Weighted Round Robin in Route 53.
iv) Route the remainder of your requests to AWS.  If the request is read-only, hit the local (read-only) RDS instance. Otherwise proxy it back to your old data center. You could do direct proxying, or you could set up an SQS queue to do the writes asynchronously to help avoid a very expensive (remote) write.
 
Hybrid architecture to lower switchover risk
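As a rough sketch of what the weighted routing in steps iii) and iv) could look like with the AWS SDK for Java – the hosted zone ID, record names and weights below are placeholders, not real values:

import com.amazonaws.services.route53.AmazonRoute53;
import com.amazonaws.services.route53.AmazonRoute53ClientBuilder;
import com.amazonaws.services.route53.model.*;

public class WeightedCutover {
    public static void main(String[] args) {
        AmazonRoute53 route53 = AmazonRoute53ClientBuilder.defaultClient();

        // Two weighted records for the same name: 90% of lookups resolve to the
        // old data center, 10% to the AWS ELB. All values here are placeholders.
        ResourceRecordSet legacy = new ResourceRecordSet()
                .withName("www.mycompany.com.")
                .withType(RRType.CNAME)
                .withSetIdentifier("legacy-dc")
                .withWeight(90L)
                .withTTL(60L)
                .withResourceRecords(new ResourceRecord("origin.mycompany.com"));

        ResourceRecordSet aws = new ResourceRecordSet()
                .withName("www.mycompany.com.")
                .withType(RRType.CNAME)
                .withSetIdentifier("aws-elb")
                .withWeight(10L)
                .withTTL(60L)
                .withResourceRecords(new ResourceRecord("my-elb-123.us-east-1.elb.amazonaws.com"));

        ChangeBatch batch = new ChangeBatch()
                .withChanges(new Change(ChangeAction.UPSERT, legacy),
                             new Change(ChangeAction.UPSERT, aws));

        route53.changeResourceRecordSets(
                new ChangeResourceRecordSetsRequest("Z_HOSTED_ZONE_ID", batch));
    }
}

Shifting traffic later is then just a matter of re-running the same upserts with different weights (say 50/50, then 0/100).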
 
Here again you will have taken your AWS understanding to the next level. If you don’t like what you see in Production on Day 1 or Day 20, you can change Route 53 to set the AWS weight to zero.  But if it works, you will have learned a lot about:
- IAM roles
- EC2 and Security
- Deploying your app onto EC2 (using Chef, Puppet etc.)
- RDS & data migration
- ELB to load balance to EC2 instances locally
- Response time variability and related issues.
- Route 53 etc.
 
Naturally your Security and DBA folks will want close involvement to make sure your data replication is secure and is not opening up any unnecessary holes to the outside world. Your architecture folks will need to keep an eye (with Ops) on latencies and monitoring.
 
The nice thing about being here is you will have done a LOT of your learning and change-making without HAVING to take the switchover risk immediately.

Get your surprises before you go “all in”!
Also, at this stage you’ll learn a lot about three areas of “surprise” in AWS (at least they were a surprise to me!):

1) Billing – it’s not what you think it is! Your usage is often very different from your original cost estimate (hint: it’s mostly EC2 + database).
2) Noisy Neighbors – everything is hunky-dory until it’s not, because your neighbor is hogging the physical CPU.
3) IOPS – related to Noisy Neighbors, but with I/O. You don’t have full control of this data center. You might find your response time needs some tweaking, or that you need to buy more provisioned IOPS.
 
Also, you can run this way for a while, letting the architecture bake in and perhaps moving more and more traffic to your AWS infrastructure. But you don’t want to maintain parallel architectures (and code paths!) for too long. Eventually you come to a tough choice:
1) Have some writes from your AWS architecture go to the remote data tier (in the old data center) to continue the gradual change.
2) Switch over the data tier to AWS (but still have some remote writes). At this stage you might choose to just finalize the switchover and go “all in” rather than take more of the remote data tier performance hits.
 
In an ideal world, if you are using MySQL master-slave replication you will have an easier time completing the switchover without too many “shenanigans” (that’s a technical term!). Alternatively, you might choose to make your application a little less chatty with the data tier (a good thing in general) so that the remote writes don’t seem so bad – and now you can gradually ramp up your Weighted Round Robin to move things over bit by bit until the day you promote RDS to the master.
 
Either way by the end of the process you’ll have one final “switchover” and be done – you can switch off your old machines and start to enjoy the cost benefits and all the flexibility of “on demand computing”.
 
I’d like to hear if other people have other suggestions on AWS migration strategies that have been proven to work.

p.s. One extra bonus – once you’ve got the “ramp-up and migration” process down, you can use the same process to stand-up more instances of your architecture in different regions. Sadly for now, you can’t have RDS create a read-replica in a different region but you CAN look into putting in place a scheme for putting local writes on an SQS queue / SNS topic and persisting it remotely to give yourself a “roll your own” data replication methodology.
[Edit: I just found out that AWS RDS *does* support cross-region replication link]
Standard
Uncategorized

It is time for a revolution in Tech hiring: Why we need to copy Baseball’s farm system

Let’s just be real for a moment – there’s not enough software talent in the world today. There just isn’t.  At least in the West for sure – the US and Europe for definite. Also, I don’t buy what the IEEE says – not by a long shot.  There are lots of resumes out there – not much talent (a lot of that mismatch – lots of people, not enough “talent” – is, I think, really just due to a lack of training).

What’s wrong with hiring and recruiting today?

Anyway no matter how good your recruiter is, they’re all fighting for the same small talent pool with the same toolset (LinkedIn).

In addition, resume screening and interviewing are so riddled with holes and assumptions it’s ridiculous. How can you boil down 20 years of technology experience to 1 or 2 pages? How can you get a sense of how someone will perform in month 1, month 6 and month 60 based on five or six 45-minute conversations with artificial problem sets?

It worked for “Build a Bear”

You can hire all the recruiters you want or hire the “best” recruiter, but there is an easier solution.  A famous man once said:

“The best way to predict your future is to create it”  

(that man was Abraham Lincoln by the way, although I believe it is also attributed to Alan Kay)

So you still compete for the same existing STEM talent with degrees the same way, but beyond that set here’s my advice:
1) You grab a bunch of high school kids who aren’t going to college or who graduated college without a STEM degree (pick them randomly initially and figure out how to screen better in future)

2) You train them for 6 months INTENSIVELY for a Web programming job (say, start with HTML/CSS –> JavaScript –> Ruby –> REST –> MongoDB –> AWS etc.)
3) At the end of that 6 months some will be good prospects for an entry level job (and probably further training in later years) and others may not be the right fit for your company (but may be employable elsewhere).

The devil is in the details

During those 6 months you see these people during ups and downs, during challenges, in teams and working by themselves. You see them learn, adapt and hopefully have fun – isn’t that the ULTIMATE employability metric?

You pay them a reasonable salary for that time they are learning – say $25k over 6 months ($50k per year annualized) – not bad for a high school kid or recent non-STEM college grad. The payoff for the employer comes by locking the best ones into a contract (say for a few years at a slightly lower SWE pay rate) to pay back the training costs before the person can become a “free agent”.
Sports like baseball have a system like this already – a farm system – defined as “generally a team or club whose role is to provide experience and training for young players, with an agreement that any successful players can move on to a higher level at a given point”, and this is a bit I did not know: “Most major league players start off their careers by working their way up the minor league system, from the lowest (Rookie) to the highest (AAA) classification”.

Isn’t that exactly what we need? A pipeline of prospective talent? Not every software team needs uber-developers. Sometimes a few good Web devs are all you need. I doubt Healthcare.gov needed Top-10%ers.

What’s the payoff?

Such a system would be a win for employers – it’s a win for kids who might not get an opportunity to go to college or otherwise get a STEM career and this would also create a larger pool to recruit from – so it’s a win for recruiters.

It could also be used to increase diversity within technology ranks.

In addition, it would take pressure off the H-1B work visa system, which is so over-subscribed that its quota for a year is filled in 5 days!  Senior tech folks already out there might be thinking that more supply will reduce salaries – but even despite AMAZING demand for people there has been flat salary growth, especially at the high end.  So much for the law of supply and demand, eh! And no – 65,000 H-1Bs are not the cause of flat salaries when these positions go unfilled for 12 weeks or more and Microsoft has over 3,300 openings. Question for another day – why are those IT salaries flat?

Anyway, here’s my proposal: Software and IT firms need a farm system. Employers can’t find the talent no matter how much they pay (unless they’re Facebook or Apple, probably) – so they need to create the talent they need. It’s a win for employers and society as a whole. There are probably some unintended second-order effects, but lower unemployment, fewer crazy hours in software, and some more diversity of backgrounds can’t be a bad thing, net net.

You can already see the demand for such a system with companies like App Academy (in SF), Launch Academy (in Boston), the Academy for Software Engineering and the Flatiron School (both in NYC) getting off the ground. But I see the likes of Facebook, Google and Apple doing their own programs, the same way each baseball team has its own farm system.

Thoughts? Could this work? What would prevent its adoption?

Standard
Uncategorized

All Scalability problems have only a few solutions . . .

I was having a great conversation recently with some technical folks about some very, very hard scalability and throughput (not necessarily response time) issues they were facing.

I racked my brain to think of what I had done in the past and realized it came down to a few different classes of solutions. First and foremost, though, the key to finding the answer is instrumenting your code and/or environment to find out where the bottleneck(s) are.

1) Simplest: Do less work
Kind of obvious, but if it’s taking you 10 hours to read through some log files and update the database, perhaps the easiest thing is to do LESS work, e.g. read fewer log files or do fewer database updates.
You could use techniques like reservoir sampling.  But maybe you have to calculate a hard number – the total cost of a set of stock market trades, for example – and estimates don’t work.  Then again, perhaps your log files don’t need to be so big? Every byte you write has to be FTP’d (and could get corrupted), and that byte has to be read later (even if it’s not needed).
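If reservoir sampling is new to you, here’s a minimal sketch in Java of the classic “Algorithm R” – keep a fixed-size, uniformly random sample of a stream without holding the whole stream in memory (the log lines below are just placeholders):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Random;

public class ReservoirSample {
    // Keep a uniform random sample of k items from a stream of unknown length,
    // using O(k) memory instead of reading everything into memory.
    static <T> List<T> sample(Iterable<T> stream, int k, Random rnd) {
        List<T> reservoir = new ArrayList<>(k);
        long seen = 0;
        for (T item : stream) {
            seen++;
            if (reservoir.size() < k) {
                reservoir.add(item);                        // fill the reservoir first
            } else {
                long j = (long) (rnd.nextDouble() * seen);  // uniform in [0, seen)
                if (j < k) {
                    reservoir.set((int) j, item);           // keep item with probability k/seen
                }
            }
        }
        return reservoir;
    }

    public static void main(String[] args) {
        List<String> logLines = Arrays.asList("line1", "line2", "line3", "line4", "line5");
        System.out.println(sample(logLines, 3, new Random()));
    }
}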

I find a lot of people forget another alternative here that involves the theme of “Do less work”. Basically if you have a good (enough) model of your input data stream then you can get a “fast but slightly inaccurate” estimate soon and then get “eventual consistency” later. It’s kind of like that old adage – “You can have it fast, correct and cheap. Pick two!” or like the CAP theorem – something’s gotta give.  Every dev team should have a Math nerd on it – because Mathematicians have been solving problems like this for decades.

2) Simple-ish: Tune what you already have
Maybe you’ve got a MySQL DB – does it have enough memory?  Perhaps network I/O is a bottleneck – dual NICs then? Check your NIC settings too (I’ve hit that once – 100 Mbps settings on a Gbps network). Perhaps you need to lower the priority of other jobs on the system.  Is your network dedicated? What’s the latency from server to DB (and elsewhere)?

Maybe when you FTP data files you should gzip them first (CPU is cheap and “plentiful” relative to memory and I/O – network and disk).  If the write is not the problem, perhaps you can tune your disk read I/O? Are you using Java NIO?  Have you considered striping your disks?  Not surprisingly, for Hadoop speedups many of the tuning recommendations are I/O-related.
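The gzip-before-transfer idea is a one-liner in spirit – trade cheap CPU for scarce network and disk I/O. A minimal sketch (the file path is hypothetical):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

public class GzipBeforeTransfer {
    // Compress a file before shipping it over the wire.
    static Path gzip(Path source) throws IOException {
        Path target = Paths.get(source.toString() + ".gz");
        try (InputStream in = Files.newInputStream(source);
             OutputStream out = new GZIPOutputStream(Files.newOutputStream(target))) {
            in.transferTo(out); // Java 9+; on older JDKs use a manual byte[] copy loop
        }
        return target;
    }

    public static void main(String[] args) throws IOException {
        // Hypothetical log file path, for illustration only.
        System.out.println("Wrote " + gzip(Paths.get("/var/log/app/events.log")));
    }
}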

Perhaps you have a multi-threaded system – can you throw more threads at it? More database connections?

For the database: Reuse database connections?  Do you really need all those indexes?  I’ve seen it be faster to drop indexes, do batch uploads and reapply indexes than to leave the indexes in place. Are you seeing database table contention – locking etc?

3) Moderate: Throw hardware at it
Seems like a cop-out for a developer to say “throw hardware at it”, but if you look at the cost of (say) $20k in better hardware (more memory, faster memory, faster disk I/O, etc.) vs. spending 4 developers for a month (costing, in the US anyway, $40k+), it’s clear where the upside is.  Developers are probably the most scarce/precious resource you have (in small organizations anyway), so spend their time wisely. They’ll appreciate you for it too!

4) Harder: Fix or redesign the code you have
This is what coders usually do but it’s expensive (considering how much devs cost these days).
Are there more efficient algorithms? How about batching inserts or updates?
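For the “batching inserts or updates” idea, a minimal JDBC sketch might look like this – the connection string, table and columns are made up; the point is addBatch/executeBatch instead of one round trip per row:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

public class BatchInsert {
    // Hypothetical connection string and table, purely for illustration.
    static void insertCounts(List<String[]> rows) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/metrics", "user", "pass");
             PreparedStatement ps = conn.prepareStatement("INSERT INTO page_counts (page, hits) VALUES (?, ?)")) {
            conn.setAutoCommit(false);
            int pending = 0;
            for (String[] row : rows) {
                ps.setString(1, row[0]);
                ps.setInt(2, Integer.parseInt(row[1]));
                ps.addBatch();
                if (++pending % 1000 == 0) {
                    ps.executeBatch();   // flush every 1000 rows to bound memory
                }
            }
            ps.executeBatch();           // flush the remainder
            conn.commit();               // one commit, not one per row
        }
    }
}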

Do you have a hotspot – e.g. disk I/O due to 10 parallel processes reading from disk?
Is the database a bottleneck – perhaps too MANY updates to the same row, page or table?
If throughput (and not response time) is your issue then perhaps making things quite a bit more asynchronous, decoupled and multi-threaded will improve your overall throughput.

Maybe instead of a process whereby you read tonnes of data from a file, update some counters and flush to the DB all in the same thread . . .

You decouple the two “blocking” pieces (reading from disk, writing to the DB), and that way you can split the problem a bit better – perhaps splitting the file and having more threads read smaller files? Drop all intermediate data into some shared queue in memory (or memcached, etc.) and then have another pool of threads read from that shared queue. Instead of one big problem you have two smaller problems, each of whose solutions can be optimized independently of the other.
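A minimal sketch of that decoupling using a bounded in-memory queue – the file reading and the DB write are stubbed out, and the thread counts and queue size are arbitrary:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DecoupledPipeline {
    private static final String POISON_PILL = "__DONE__";

    public static void main(String[] args) throws InterruptedException {
        // Bounded queue: the reader blocks when the writers fall behind (back-pressure).
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
        ExecutorService writers = Executors.newFixedThreadPool(4);

        // Writer pool: each thread drains the queue and (in real life) batches rows to the DB.
        for (int i = 0; i < 4; i++) {
            writers.submit(() -> {
                try {
                    String record;
                    while (!(record = queue.take()).equals(POISON_PILL)) {
                        persist(record);               // stub for the DB write
                    }
                    queue.put(POISON_PILL);            // pass the shutdown signal along
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // Reader: in real life this would stream one (or several) log files.
        for (int line = 0; line < 100_000; line++) {
            queue.put("log-line-" + line);
        }
        queue.put(POISON_PILL);
        writers.shutdown();
    }

    private static void persist(String record) {
        // placeholder: batch these up and flush to the database
    }
}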

Kind of a mix of “Fix the code” and #1 “Do less work” is when you realize you are redoing the same calculations over and over again. For example, taking an average over the last 30 days requires you to get today’s new data but also to re-retrieve the 29 prior days’ worth of data. Make sure you precalculate and cache everything you can.  If you are summing the past 30 days of data, for example (D1 . . .  D30), tomorrow you will need (D2 . . D31) – you can precalculate (D2 . . D30) today for tomorrow. Not that the math is hard for CPUs, but you get the idea . . . spend CPU today to save I/O tomorrow!
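Here’s a tiny illustration of that “spend CPU today to save I/O tomorrow” idea – a rolling 30-day sum kept incrementally, so each new day costs one add and one subtract instead of re-reading all 30 days (the daily totals are placeholders):

public class RollingAverage {
    private final double[] window = new double[30]; // last 30 daily totals
    private int next = 0;      // slot to overwrite next
    private int count = 0;     // how many days we have seen so far (capped at 30)
    private double sum = 0.0;  // cached running sum of the window

    // Add today's total; evict the day that just fell out of the 30-day window.
    public void addDay(double todaysTotal) {
        if (count == window.length) {
            sum -= window[next];          // drop D1 when D31 arrives
        } else {
            count++;
        }
        window[next] = todaysTotal;
        sum += todaysTotal;
        next = (next + 1) % window.length;
    }

    public double average() {
        return count == 0 ? 0.0 : sum / count;
    }

    public static void main(String[] args) {
        RollingAverage avg = new RollingAverage();
        for (int day = 1; day <= 45; day++) {
            avg.addDay(100 + day);        // placeholder daily totals
        }
        System.out.println("30-day average: " + avg.average());
    }
}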

An example of being smart about what you calculate is here in this MIT paper “Fast Averaging“.  If your data is “well behaved” you can get an average with a lot less work.

Decoupling with Queues is my favorite technique here but you have to be smart about what you decouple.

5) Hardest:  Rearchitect what you have
Developers love to do this – it’s like a greenfield but with cool new technology – but it should be the last on your list.  Sometimes however it’s just necessary. Amazon and eBay have done it countless times. I am sure Google and Facebook have too. I mean they INVENTED whole new systems and architectures (NoSQL, BigTable, DynamoDB etc.) to handle these issues.  Then again Facebook still uses MySQL :-)

Summary
Again, all of these approaches, if they are to be successful and a good use of time, rely on knowing where your bottleneck is in the first place – identifying it and beating on that problem until it cries “Momma!” :-)  But let’s never forget that the classes of solutions are pretty constant – and the choice basically comes down to how much time and money you can afford to spend fixing it.

Ok over to you dear reader – what did I miss, what did I forget? Is there another class of solution?

Standard
Uncategorized

7 Steps to Software Delivery Nirvana

1) Hire great people
Attitude
Communications Skills
Knowledge & Programming Skill

Remember, you won’t get everything you need – if you need to give up something, go for a “fast learner” who doesn’t have all the technical knowledge (he or she will get there).

2) Hire great people and know what the customer’s priorities are
Manage Requirements Risk

Fast beats perfect  – Be prepared to demo/ship -> learn -> iterate.

3) Hire great people, only build what you need and set expectations
Manage Design Risk – especially do NOT “over design” – keep it simple and refactor as you learn

YAGNI – remember, “Done beats perfect”

Spot dependencies up front and prepare to manage them

4) Hire great people and show progress
Manage Development Risk – especially estimates & dependencies

- Unit Tests
- Coverage
- Continuous Integration

5) Hire great people and show a quality product
Manage test & delivery risk

Regular (continuous) releases to customers

Quality != Bug free. “Shipped beats perfection”

6) Keep your great people

Remove demotivators
- people and processes
- recurring bugs etc.

Encourage them to learn new things (balancing against delivery risk)

Understand what motivates developers & testers: autonomy, mastery and purpose.

Different people want different things: many managers think developers want to be managers.
Well, if you look at most managers, they don’t seem too happy to me.  It’s a hard job.
Lots of people like to design and build and solve technical challenges.

7) Sharpen the saw

  • Keep your CI fast
  • Keep learning
  • Retrospectives
  • Balance
  • There’s more to life than building software

What is NOT AS important (emphasis on “AS”)
1) Waterfall vs. Agile
2) Scrum vs. Kanban
3) Java vs. C# vs. Python vs. Ruby
4) SQL vs. NoSQL

Yes, each delivers some incremental improvement (in SOME contexts – not ALL contexts). But if you don’t have great people, it won’t matter whether you use Waterfall, Scrum or Kanban, in Java, Python or Ruby.

Standard
Uncategorized

What I love about code . . .

After doing more managing than normal and getting back to coding I realize just how much I like to code and why . . . .

Code either works or it doesn’t. There’s no room for subjectivity between it and me.
And if it doesn’t, you can fix it. It doesn’t have to be cajoled, mentored, advised or given feedback.
You don’t need to worry about the motivation of code.
You don’t need to worry about what code thinks about you – you can test it as much or as little as you want. It just does its job – gets compiled / interpreted and executed.
The code doesn’t care if your dependencies are in place or not – it just *IS*.
The code doesn’t worry about reorgs or P&L or whether it’s executed in your own datacenter or AWS or your desktop or laptop.
It doesn’t care if you have documentation or not, code coverage or not, customers or not.

But as much as that’s awesome, we live in a world that is so much more – a world where perception matters. Where we work in teams with people who are people – different, fallible, with ups & downs, with other stuff going on and often with different priorities and different motivations. Where reorgs and P&L matter. Where ultimately we need to build a product that people love (or at least like).

Code is awesome – but as Coders we can’t just live in that world – most of the “real” problems in Software are people problems – the coding problems are easy in comparison.

POSTSCRIPT: After re-reading this I should give some props to 1 Corinthians – replacing “love” with “code” :-)

Standard