Planet GRPUG

November 25, 2019

Whitemice Consulting

Uncoloring ls (2019)

This is an update to "Uncoloring ls," which documented how to disable colored ls output on older systems that define that behavior in a profile.d script.

Some more recent systems load the colorization rules in a more generalized fashion. The load still occurs from a profile.d script, typically ls.bash, but it is mixed in with other functionality related to customizing the shell.

The newer profile.d script looks first for $HOME/.dir_colors, and if not found looks for /etc/DIR_COLORS.
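
On such systems the lookup is roughly the following; this is a paraphrased sketch of a typical ls.bash, not the verbatim script shipped by any particular distribution:

# Sketch of the colorization lookup performed by a typical profile.d/ls.bash.
# In this sketch, the -s test (file exists and is non-empty) is what would
# make an empty ~/.dir_colors act as an "off switch", as described below.
COLOR_RULES=""
if [ -f "$HOME/.dir_colors" ]; then
    COLOR_RULES="$HOME/.dir_colors"
elif [ -f /etc/DIR_COLORS ]; then
    COLOR_RULES="/etc/DIR_COLORS"
fi
if [ -n "$COLOR_RULES" ] && [ -s "$COLOR_RULES" ]; then
    eval "$(dircolors -b "$COLOR_RULES")"
    alias ls='ls --color=auto'
fi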

To disable colorized ls for a specific user, create an empty .dir_colors file in that user's home directory:

touch $HOME/.dir_colors

To disable it for all users, move /etc/DIR_COLORS aside so that the file no longer exists:

sudo mv /etc/DIR_COLORS /etc/DIR_COLORS.disabled

by whitemice at November 25, 2019 06:33 PM

October 21, 2019

Whitemice Consulting

PostgreSQL: "UNIX Time" To Date

In some effort to avoid time-zone drama, or perhaps due to fantasies of efficiency, some developer stored a date-time field in a PostgreSQL database as an integer; specifically, as a UNIX Time value. ¯\_(ツ)_/¯

How to present this as a normal date in a query result?

date_trunc('day', (TIMESTAMP 'epoch' + (j.last_modified * INTERVAL '1 second'))) AS last_action,

This takes the start of the epoch, adds the value in seconds - the UNIX Time - and truncates the result to a non-localized year-month-day value.

Clarification#1: j is the alias of the table in the statement's FROM.

Clarification#2: last_modified is the field which is an integer time value.
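
Putting it together, a complete statement might look like the following; the jobs table and its id column are hypothetical stand-ins, and only the integer last_modified column is taken from the snippet above:

SELECT
  j.id,
  date_trunc('day', (TIMESTAMP 'epoch' + (j.last_modified * INTERVAL '1 second'))) AS last_action
FROM jobs j
ORDER BY last_action DESC;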

by whitemice at October 21, 2019 01:36 PM

September 11, 2019

Whitemice Consulting

PostgreSQL: Casted Indexes

Dates in databases are a tedious thing. Sometimes a time value is recorded as a timestamp; at other times - probably in most cases - it is recorded as a date. Yet it can be useful to perform date-time queries using a representation of time distinct from what is recorded in the table: for example, a database which records timestamps, but against which I want to look up records by date.

To this end PostgreSQL supports indexing a table by a cast of a field.

Create A Sample

testing=> CREATE TABLE tstest (id int, ts timestamp);
CREATE TABLE
testing=> INSERT INTO tstest VALUES (1,'2018-09-01 12:30:16');
INSERT 0 1
testing=> INSERT INTO tstest VALUES (2,'2019-09-02 10:30:17');
INSERT 0 1

Create The Index

Now we can use the "::" operator to create an index on the ts field, but as a date rather than a timestamp.

testing=> create index tstest_tstodate on tstest ((ts::date));
CREATE INDEX

Testing

Now, will the database use this index? Yes, provided we cast ts as we do in the index.

testing=>SET ENABLE_SEQSCAN=off;
SET
testing=> EXPLAIN SELECT * FROM tstest WHERE ts::date='2019-09-02';
                                 QUERY PLAN                                  
-----------------------------------------------------------------------------
 Index Scan using tstest_tstodate on tstest  (cost=0.13..8.14 rows=1 width=12)
   Index Cond: ((ts)::date = '2019-09-02'::date)
(2 rows)

For demonstration it is necessary to disable sequential scanning, ENABLE_SEQSCAN=off, otherwise with a table this small PostgreSQL will never use any index.

Casting values in an index can be a significant performance win when you frequently query data in a form differing from its recorded form.

by whitemice at September 11, 2019 03:09 PM

August 31, 2019

zigg.com (Matt Beherens' blog)

On supporting a friend

I've been thinking this morning about the nature of support, and how we can offer it to our loved ones.

I think this is an unfortunately common thought pattern: in order to offer support, we have to take an active role in another's life. We have to make our loved ones' endeavors our own; we have to literally take part, right? Otherwise, the thinking goes, we're not being supportive.

But if we don't enjoy the thing, if we don't feel that personal pull, if we are personally worn-out, if our hearts are not there, is that actually support at all? Are we sacrificing a part of ourselves, tearing ourselves up inside and giving our loved ones a tattered piece of paper that says “support” that says more about how we hurt than how we love them?

There's a little mental exercise I do often, using my dear friends as foils. It goes like this: instead of asking what they would want me to do (which is colored by my own negative feelings of self), if the tables were turned, what would I want them to do? The answer is clear and rings true: I would rather see them care for themselves, do what makes them happy, and share that with me.

The recipe for good support, then, isn't that I necessarily engage directly with what makes a loved one happy—unless doing so personally brings me joy. The recipe is simply this: that I draw happiness from the fact that they are doing something they love, enjoying something, believing in a thing deep in their heart.

Of course, if we do also legitimately find joy in sharing something with a friend, we do find that shared experiences bring us closer, bring our hearts together. But you can't force hearts together to get that; they need to share a bond of mutual enjoyment. If you aren't into it, don't force yourself—rather, take joy in your loved one's joy and share that instead. That will also bring you closer together, without tearing either of you up in the process.

Consider sharing what you love with those you love, and consider that you don't need to be experiencing it right alongside them to be a good friend. You can just share in their joy. And that, right there, makes you a good friend—and I believe your loved ones would say the same.

by Matt Behrens at August 31, 2019 09:09 AM

Your candle

You have a candle. It has a beautiful flame, unique and in colors not often seen in this world.

You want everyone to share the joy you get from that candle, to understand where the flame comes from, to love its colors like you do.

But it’s not like any candle they’ve seen. And so you have to burn it brighter, hotter, really let them get a good look at it and the light it casts on your face, let them see you illuminated in its beauty.

Unfortunately, you only have the one candle. And when it’s spent, it’s spent.

It breaks your heart, but as you’ve watched that candle burn, you know… you can’t just give it to everyone, share it with everyone. You can’t make everyone look at it. There just isn’t enough to go around. You’ll burn it down to your fingertips getting it bright enough to even get them to consider looking at it. You'll eventually not be able to show anyone anything.

Some people will know you have a beautiful light and they beg to see it. But they’re carrying their own candle and won’t put it down, so you’ll need to burn yours much more brightly for them to see it. You'll risk burning it down even faster.

Some people you desperately want to share the light with, people you want to tell of the joy it brings you. But they think it’s a strange color and complain they can’t see you well by its light. If only it were a yellow flame like their candles burned with. Then they could see. Why isn’t your flame yellow?

Some wave you away when you show up with your candle. We'll let you have it, but don't bring it too close, they say. It makes me uncomfortable.

Some want to extinguish your flame. There’s no place for that here, they say. It's unnatural.

A few people, though, have their own candles that burn in their own, unique, beautiful way—like but also wholly unlike yours—and you can just touch your candle to theirs, creating something new, a unity creating brand-new colors, never seen before, yet clearly composed of each of your flames.

And that’s when it’s just you two. You can add more, and more, and more. Each of you contributing your own quiet, small flame, never burning any of your candles too much, and yet creating a robust and glorious show of light and warmth and love.

Even as you stand there, making a delightful, colorful symphony of beauty, those who do not understand the beauty you have are grumbling, saying that you all should just get candles out of the boxes they brought. They all burn the same way, and look—there are so many, we will never run out. It will be much easier for you if you just burn these candles like us.

And you take a stand and say, no. I will not extinguish this beauty. I will delight in it, share it with those who can see it as it is, those who will put their own lights down, those who will defend its quiet beauty.

And maybe, just maybe, even though they have simple candles themselves, they can use what they have to illuminate the way. They can show everyone how they can put down their bright and brash fire. They can show everyone how to approach with love and understanding—forget themselves and shed their preconceptions of what a candle should look like. Look at what you have to show them.

Your beautiful flame.

by Matt Behrens at August 31, 2019 08:29 AM

Review: GRIS

I haven't reviewed a game since 2011—my last was my review of Atsumete! Kirby (a.k.a. Kirby Mass Attack) for my old games media stomping grounds formerly known as N-Sider. But after playing Nomada Studio's GRIS this weekend, I felt like sitting down and writing because I have been moved in a way that I haven't been in a good while.

Nintendo has this great setup these days; if you wishlist a game on the Switch's eShop, you'll get an email when one goes on sale, which is great because perusing the eShop's games-on-sale list is charitably an exercise in “wow, there are a lot of games here that are not for me”. With the sale emails, I am thus freed from this responsibility. I wishlisted GRIS based on its launch trailer, which, goddamn, isn't that beautiful? And then I got the email and it was around ten bucks and I said “yes”.

I'll be brief about the premise: Gris, the blue-haired protagonist in the trailer, has lost their beautiful singing voice—and the game is about them working through that loss. There's nary a word apart from the unobtrusive achievements you'll unlock at various points (many of which I still have undone); the story is told through the changes in the world, the beautiful, beautiful soundtrack and art, and the layering of color. Their world has been shattered; they have their loss to cope with and their life to rebuild, and this will literally happen as you progress.

I was asked by a close friend who was actually a bit wary, wondering if GRIS could be a traumatic or triggering experience, with the main character going through a difficult loss. I don't believe it is. The striking visuals and music may make you tear up (oh hey, it's me); and there's plenty to read into the art and animation—colors representing strong emotion, the scenes of a world crumbled away, and at times fleeing from literally being swallowed by dark shapes—but it gets no more concrete than that. It's powerful without realizing the kinds of losses you may experience in the real world.

But it is moving, and in surprising ways. It feels almost cliché to describe your progression through a video game and your unlocking of abilities as part of that as “empowering”, and yet that's literally what it is, with the game's design built hand-in-hand with its narrative. The abilities you gain and the mechanics you experience are aligned with Gris' journey, starting at the very beginning when Gris can barely move, slumping and collapsing instead of jumping, right through the end when acceptance gifts them the ability to give life to the world around them. Early on, I had the game pegged as (if you'll forgive me) a “basic indie platformer” without much finesse, only to find that by the end, Gris had become strong and fluid, moving through their world with ease and intent.

I found myself experiencing some anxiety artificially induced by the numerous points of no return—especially as there are collectible items throughout the game I could often see but never reach before they were locked off behind me—but take heart; when you've completed the experience, you'll be able to go back to several points via a chapter select and give those another shot. I've only briefly experienced this so far, but I did find it rather interesting that replaying the opening chapter made me feel authentically powerless, instead of artificially like I find myself feeling when returning to the beginning of most games.

It seems to me we are firmly in an era of games seeking to be art—not in that shallow way that an industry desperately reaching for respectability did a decade ago, but instead in a truly authentic way, drawn from experiences, realized around the human condition. Much like Gris at the end of their journey, I feel GRIS stands tall, confident, and strong in this pantheon. I know from years of experience watching video games that a studio making one amazing game doesn't mean their next will be the same, but I'm nonetheless finding myself desperately curious about what Nomada may make next. Even if they never make another game like this, GRIS moved me and I am grateful for that experience.

by Matt Behrens at August 31, 2019 08:29 AM

August 30, 2019

Whitemice Consulting

Listing Printer/Device Assignments

The assignment of print queues to device URIs can be listed from a CUPS server using the "-v" option.

The following authenticates to the CUPS server cups.example.com as user adam and lists the queue and device URI relationships.

[user@host ~]# lpstat -U adam -h cups.example.com:631 -v | more
device for brtlm1: lpd://cismfp1.example.com/lp
device for brtlp1: socket://lpd02914.example.com:9100
device for brtlp2: socket://LPD02369.example.com:9100
device for brtmfp1: lpd://brtmfp1.example.com/lp
device for btcmfp1: lpd://btcmfp1.example.com/lp
device for cenlm1: lpd://LPD04717.example.com/lp
device for cenlp: socket://LPD02697.example.com:9100
device for cenmfp1: ipp://cenmfp1.example.com/ipp/
device for ogo_cs_sales_invoices: cups-to-ogo://attachfs/399999909/${guid}.pdf?mode=file&pa.cupsJobId=${id}&pa.cupsJobUser=${user}&pa.cupsJobTitle=${title}
device for pdf: ipp-to-pdf://smtp
...

by whitemice at August 30, 2019 07:36 PM

Reprinting Completed Jobs

Listing completed jobs

By default the lpstat command lists the queued/pending jobs on a print queue. However the completed jobs still present on the server can be listed using the "-W completed" option.

For example, to list the completed jobs on the local print server for the queue named "examplep":

[user@host] lpstat -H localhost -W completed examplep
examplep-8821248         ogo             249856   Fri 30 Aug 2019 02:17:14 PM EDT
examplep-8821289         ogo             251904   Fri 30 Aug 2019 02:28:04 PM EDT
examplep-8821290         ogo             253952   Fri 30 Aug 2019 02:28:08 PM EDT
examplep-8821321         ogo             249856   Fri 30 Aug 2019 02:34:48 PM EDT
examplep-8821333         ogo             222208   Fri 30 Aug 2019 02:38:16 PM EDT
examplep-8821337         ogo             249856   Fri 30 Aug 2019 02:38:50 PM EDT
examplep-8821343         ogo             249856   Fri 30 Aug 2019 02:39:31 PM EDT
examplep-8821351         ogo             248832   Fri 30 Aug 2019 02:41:46 PM EDT
examplep-8821465         smagee            1024   Fri 30 Aug 2019 03:06:54 PM EDT
examplep-8821477         smagee          154624   Fri 30 Aug 2019 03:09:38 PM EDT
examplep-8821493         smagee          149504   Fri 30 Aug 2019 03:12:09 PM EDT
examplep-8821505         smagee           27648   Fri 30 Aug 2019 03:12:36 PM EDT
examplep-8821507         ogo             256000   Fri 30 Aug 2019 03:13:26 PM EDT
examplep-8821562         ogo             251904   Fri 30 Aug 2019 03:23:14 PM EDT

Reprinting a completed job

Once the job id is known (the far left column of the lpstat output), the job can be resubmitted using the lp command.

To reprint the job with the id of "examplep-8821343", simply:

[user@host] lp -i examplep-8821343 -H restart

by whitemice at August 30, 2019 07:29 PM

Creating & Deleting CUPS Queues via CLI

Create A Print Queue

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631 -p examplelm1 -E \
  -m "foomatic:HP-LaserJet-laserjet.ppd" -D "Example Pick Ticket Printer"\
   -L "Grand Rapids" -E -v lpd://printer.example.com/lp

This will create a queue named examplelm1 on the host cups.example.com as user adam.

  • "-D" and "-L" specify the printer's description and location, respectively.
  • The "-E" option, which must occur after the "-h" and "-p" options, instructs CUPS to immediately set the new print queue to enabled and accepting jobs (a sketch of doing this as a separate step follows this list).
  • The "-v" option specifies the device URI used to communicate with the actual printer.
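
As mentioned in the list above, a queue created without "-E" can be enabled and set to accept jobs as a separate step; a sketch using the stock CUPS commands:

[root@host ~]# cupsenable examplelm1
[root@host ~]# cupsaccept examplelm1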

The printer driver file "foomatic:HP-LaserJet-laserjet.ppd" must be a PPD file available to the print server. PPD files installed on the server can be listed using the "lpinfo -m" command:

[root@crew ~]# lpinfo -m | more
foomatic:Alps-MD-1000-md2k.ppd Alps MD-1000 Foomatic/md2k
foomatic:Alps-MD-1000-ppmtomd.ppd Alps MD-1000 Foomatic/ppmtomd
foomatic:Alps-MD-1300-md1xMono.ppd Alps MD-1300 Foomatic/md1xMono
foomatic:Alps-MD-1300-md2k.ppd Alps MD-1300 Foomatic/md2k
foomatic:Alps-MD-1300-ppmtomd.ppd Alps MD-1300 Foomatic/ppmtomd
...

The existence of the new printer can be verified by checking its status:

[root@host ~]# lpq -Pexamplelm1
examplelm1 is ready
no entries

The "-l" option of the lpstat command can be used to interrogate the details of the queue:

[root@host ~]# lpstat -l -pexamplelm1
printer examplelm1 is idle.  enabled since Fri 30 Aug 2019 02:56:11 PM EDT
    Form mounted:
    Content types: any
    Printer types: unknown
    Description: Example Pick Ticket Printer
    Alerts: none
    Location: Grand Rapids
    Connection: direct
    Interface: /etc/cups/ppd/examplelm1.ppd
    On fault: no alert
    After fault: continue
    Users allowed:
        (all)
    Forms allowed:
        (none)
    Banner required
    Charset sets:
        (none)
    Default pitch:
    Default page size:
    Default port settings:

Delete A Print Queue

A print queue can also be deleted using the same lpadmin command used to create the queue.

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631 -x examplelm1
Password for adam on crew.mormail.com? 
lpadmin: The printer or class was not found.
[root@host ~]# lpq -Pexamplelm1
lpq: Unknown destination "examplelm1"!

Note that deleting the print queue appears to fail; this is only because the lpadmin command attempts to report the status of the named queue after the operation, and by then the queue no longer exists.

by whitemice at August 30, 2019 07:11 PM

July 25, 2019

Whitemice Consulting

Changing Domain Password

Uh oh, Active Directory password is going to expire!

Ugh, do I need to log into a Windows workstation to change my password?

Nope, it is as easy as:

awilliam@beast01:~> smbpasswd -U DOMAIN/adam  -r example.com
Old SMB password:
New SMB password:
Retype new SMB password:
Password changed for user adam

In this case DOMAIN is the NetBIOS domain name and example.com is the domain's DNS domain. One could also specify a domain controller with -r; however, in most cases the bare base domain of an Active Directory backed network will resolve to the active collection of domain controllers.
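
For example, to direct the change at one specific domain controller rather than letting the domain name resolve (dc1.example.com is a hypothetical hostname):

awilliam@beast01:~> smbpasswd -U DOMAIN/adam -r dc1.example.com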

by whitemice at July 25, 2019 03:29 PM

May 24, 2019

Whitemice Consulting

CRON Jobs Fail To Run w/PAM Error

Added a cron job to a service account's crontab using the standard crontab -e -u ogo command. This server has been chugging away for more than a year, with lots of stuff running within the service account - but nothing via cron.

Subsequently the cron jobs didn't run. :( The error logged in /var/log/cron was:

May 24 14:45:01 purple crond[18909]: (ogo) PAM ERROR (Authentication service cannot retrieve authentication info)

The issue turned out to be that the service account - which is a local account, not something from AD, LDAP, etc... - did not have a corresponding entry in /etc/shadow. This breaks CentOS7's default PAM stack (specified in /etc/pam.d/crond). The handy utility pwck will fix this issue, after which the jobs ran without error.

[root@purple ~]# pwck
add user 'ogo' in /etc/shadow? y
pwck: the files have been updated
[root@purple ~]# grep ogo /etc/shadow
ogo:x:18040:0:99999:7:::

by whitemice at May 24, 2019 08:09 PM

April 18, 2019

Whitemice Consulting

MySQL: Reporting Size Of All Tables

This is a query to report the number of rows and the estimated size of all the tables in a MySQL database:

SELECT 
  table_name, 
  table_rows, 
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
FROM information_schema.tables
WHERE table_schema = 'maindb';

Results look like:

table_name                                  table_rows mb_size 
------------------------------------------- ---------- ------- 
mageplaza_seodashboard_noroute_report_issue 314314     37.56   
catalog_product_entity_int                  283244     28.92   
catalog_product_entity_varchar              259073     29.84   
amconnector_product_log_details             178848     6.02    
catalog_product_entity_decimal              135936     16.02   
shipperhq_quote_package_items               115552     11.03   
amconnector_product_log                     114400     767.00  
amconnector_productinventory_log_details    114264     3.52    
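
To surface the worst offenders first, the same query can be sorted by the computed size; MySQL permits ordering by a column alias:

SELECT
  table_name,
  table_rows,
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
FROM information_schema.tables
WHERE table_schema = 'maindb'
ORDER BY mb_size DESC;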

This is a very useful query, as the majority of MySQL applications are poorly designed; they tend not to clean up after themselves.

by whitemice at April 18, 2019 06:30 PM

April 08, 2019

Whitemice Consulting

Informix: Listing The Locks

The current database locks in an Informix engine are easily enumerated from the sysmaster database.

SELECT 
  TRIM(s.username) AS user, 
  TRIM(l.dbsname) AS database, 
  TRIM(l.tabname) AS table,
  TRIM(l.type) AS type,
  s.sid AS session,
  l.rowidlk AS rowid
FROM sysmaster:syslocks l
  INNER JOIN sysmaster:syssessions s ON (s.sid = l.owner)
WHERE l.dbsname NOT IN('sysmaster')
ORDER BY 1; 

The results are pretty straightforward:

User     Database Table           Type Session Row ID
extranet maindb   site_master     IS   436320  0
shuber   maindb   workorder       IS   436353  0
shuber   maindb   workorder       IX   436353  0
shuber   maindb   workorder_visit IS   436353  0
extranet maindb   customer_master IS   436364  0
jkelley  maindb   workorder       IX   436379  0
jkelley  maindb   workorder       IS   436379  0
mwathen  maindb   workorder       IS   436458  0

by whitemice at April 08, 2019 08:10 PM

March 18, 2019

zigg.com (Matt Beherens' blog)

Resetting a Wacom Bamboo Spark

Last week, I turned on my Wacom Bamboo Spark smartpad (no longer available, but Wacom has other smartpad models) and the two indicator lights started flashing alternately like a railroad crossing signal.

I could go through the Inkspace re-pairing process successfully, despite the lights never flashing, but the Spark would no longer recognize or record—or at the very least, would not sync—any additional handwritten notes I would make.

I contacted Wacom on Tuesday. After several days of silence, I finally tweeted angrily at them. Some DMs later and, that night, I had some instructions in my inbox on how to reset my Spark that were not available on their support site.

Here's how you reset a Wacom Bamboo Spark, using an iOS device with the Inkspace app installed.

  1. Tap the Settings menu (gear icon) in the upper-right corner of the app.

  2. Select “Your Device”.

  3. Select “Pair Device”.

  4. Turn the Spark on, and select “Next”.

  5. Hold the Spark's page button until Inkspace shows “Select your device”.

  6. Select your device from the list and select “Next”.

  7. Press the Spark's page button to confirm.

  8. Tap five times on the “Enter a unique name” label.

  9. Confirm the “Device Memory Reset” dialog by selecting “Reset”.

  10. Continue with the pairing process until complete.

I hope this helps someone out—I enjoy my Spark and was quite put out at not being able to digitize notes for a week.

by Matt Behrens at March 18, 2019 07:57 PM

A PyOhio emergency

As you may have seen, I was at PyOhio this weekend, and I was t{wee,oo}ting a lot. As such, my Apple Watch was going nuts with RTs, faves, &c. I was talking to some people in a hallway and force-pressed to clear my notifications… and somehow the “Clear All” button got stuck on my watch screen.

I kept pressing it, and nothing happened. So I tried holding down the button that normally would bring up the power-off screen…

Watch: (loudly) WHOOP WHOOP

Apple Watch has a feature that I've never had occasion to try: if you hold down the function button for even longer than it takes to get the power-off screen, it will go into emergency mode and eventually call the local emergency dispatch service.

I'm now in full panic mode. I pull out my phone and try to determine whether it's calling 911. I can't… tell. I'm frantically searching Apple's support site to find out how to hard-power-down my watch, pronto, all while holding the watch to my ear to make sure that if a voice starts coming out of it asking what my emergency is, I'm ready to explain and apologize profusely.

Finally, I find the article. “Hold both buttons down till you see the Apple logo,” it says. I did this, and finally, finally, the watch definitively powers down and I reboot it.

As far as I can tell, emergency dispatchers were never summoned, and nobody at the conference got in trouble, particularly yours truly. And I amused the people I was talking with, a little.

Phew. 😅

by Matt Behrens at March 18, 2019 07:57 PM

Lessons learned losing

This morning I woke up, weighed myself, and found I'd finally lost 50 pounds since I started pursuing weight loss in earnest in September 2016.

This isn't my first time here. Back in 2008, when Wii Fit originally came out, I also shed a good amount of weight, landing about 5 pounds or so over where I am today. But that loss wasn't as good as it could have been:

  • Intellectually, I knew that losing 5 pounds a week was unsustainable, but I had no problem shedding somewhere under that so long as I wasn't collapsing on the floor.
  • I was losing primarily by following a rule I made up for myself: "eat half of what you want to eat." This left me with the mindset that I was always shortchanging myself.
  • Once I hit a place where I felt pretty good about my weight, I declared myself done. Now I could just “be healthy” without much effort.
  • Perhaps the most important: I was always pushing myself, using every last bit of energy I had on burning and focusing on eating less. I could do this then because I had a job that didn't demand much, but I was always on the verge of falling apart if something else happened in my life.

Given this, it shouldn't be any surprise that years later, all that work was undone—and then some.

Work requires energy

I struggled with the fact that I'd lost the fruits of my hard work for several years. It fed into an undercurrent of depression in my life. For a few years, I had decided that I was hopeless; that I couldn't lose weight.

Paradoxically, my new, very stimulating work at Atomic Object compounded this, by being an additional demand on my limited reserve of energy. The idea of eating less or exercising was something I tried to push through, but I'd fall flat, tired and running back to the arms of too much food and too little movement.

What finally broke this chain was a visit to my doctor. It looked like people around me were able to keep these balls in the air; was there something wrong with me? My doctor is a great listener and has an uncannily good sense of where I might want to explore a complaint, even when I'm verbally tripping over myself trying to explain what's wrong—and while he took a blood test too, he suggested that I was probably suffering from depression, and that I should consult a sleep doctor.

Sleep being a problem was something I'd never considered before. I knew I “snored a little”, but I usually fell asleep fairly readily and was able to drag myself out of bed in the morning—surely I didn't actually have a sleep disorder? Gamely, I set myself up with that appointment (pro tip: it takes a long-ass time to see a sleep doctor, and several more long-ass times to get the study, &c.—don't delay making that appointment if you think it can help you!) and simultaneously started seeing a therapist to address my depression and anxiety.

I have foggy memories of how I was when I started seeing my therapist. More importantly, I have journal entries I made from that time. I came in with a preconceived list of reasons I thought I was suffering, things we explored but ultimately thinned out considerably. It was simply hard for me to deal with just about any adversity in my life; I'd break down. I wasn't enjoying a lot of things I used to enjoy in my life even when there wasn't anything trying to keep me down.

We worked on those things, and made progress, little by little. He taught me techniques to work on being mindful of my emotions—the one about conscious breathing and setting aside negative thoughts in particular was huge. He encouraged me to keep working on me, by pushing through the demands of sleep medicine and doing just a little bit of exercise each day, always looking forward and not dwelling on the past.

The day I finally saw the sleep doctor, fortune had it that I was a little bit of a wreck. My home sleep study was months out—sleep medicine in general seems to be woefully under-resourced, which is shocking considering how many people are living with undiagnosed sleep disorders. I was a little teary, but I pushed through and became my own advocate, and walked out with a promise to slot me in if there were any cancellations and a few sample packs of Silenor to help me get through the period until I could pick up the sleep study equipment.

It took a lot out of me that day to advocate for myself, to press my case, but I'm so glad I did.

Turning point

Silenor (doxepin) is a good drug, but it's also yet another example of what's wrong with American medicine. Those sample packs helped me get what was probably the most consistent sleep I'd had in a long time. But when they ran out, the little pills were amazingly expensive and not something my insurance was interested in paying for unless I'd tried a litany of other sleep medicines first. Sorry, but fuck that—this worked, it wasn't addictive, I hoped that it was short-term.

Thankfully, I had a resourceful sleep doctor who noted that doxepin has actually been on the market a very long time, as the antidepressant Sinequan—and has a readily-available and very cheap generic, if you don't mind it coming in a disgustingly minty syrup form for some reason. I mixed mine with a glass of water every night.

With doxepin helping me sleep more deeply, my life started to change. I had more energy to work on myself with the assistance of my therapist. I had more energy to start to look at what I was eating and continue to engage in a little bit of physical activity several days a week. I was able to deal with life when it decided it hated me. I started weighing myself and tracking my exercise.

Finally, I was able to have the sleep study. The results came back in a few days. I'll always remember the call from my doctor and my internal reactions to it:

Doctor: You stopped breathing 8-10 times…

Me: That's not so bad…

Doctor: …per hour…

Me: OH MY G–

Doctor: …which is mild sleep apnea.

Me: Ha ha. What.

Several months later, thanks to some really awful insurance confusion—the sleep office thought that I could have a “titration study” to figure out what level of air pressure I'd need from my sleep machine as well as what mask fit me well; insurance didn't think I had a bad enough time to justify that and could just go home with a machine set wide open; and all of this just meant delays, delays, delays—I stopped taking the doxepin and started strapping a mask to my face every night.

It took a little while to adjust—wide-open APAP machines have a bad habit of giving you so much pressure that the mask tries to lift off your face; the sleep techs lowered the max pressure to fix that—but about a month and several changes later, I found myself with the energy I needed to get started.

Working on me

Being able to deal with depression and having better sleep equipped me to be able to work on myself like I knew I needed to. I set several goals for myself:

  • I would aim first for losing 50 pounds, at a rate of about 1-2 pounds per week.
  • I would track everything I ate and set a calorie goal for each day. (I'm using MyFitnessPal, which in itself took some effort—I quit using it once in the past because I couldn't stand the poor quality of the nutrition database, and I also wasn't super-thrilled about data in the cloud. The secret? Accept that some things are imperfect. Like many judged medical interventions, using it is more helpful than not using it.)
  • I would work up to exercising 30 minutes a day most days of the week.

In the beginning, it was really easy to lose. I shed over 2 pounds a week, and pushed to rein myself in. There are a lot of reasons for this that I've heard, ranging from losing water weight (something that happens to just about everyone when they change their digestive equilibrium and usually looks really dramatic and encouraging) to the fact that I was pretty sedentary—just thinking about getting off the couch was enough to get my heart rate to 150 bpm.

And so I started in on this. Several days a week, I'd ride my stationary bike for ten minutes. I was wearing an Apple Watch at this point (something I'd bought just for notifications, but it turned out the fitness tracking was super helpful), so I knew that the calories I lost were very close to just noise—but moving helped me feel better, and helped me up that energy level just a little bit more. Soon I bumped it up to fifteen. Twenty. Pulled myself back whenever I started to feel run-down, but made sure to do something every day.

I tried hard not to beat myself up if I missed a day, be it for injury or maybe a wave of depression. This happens. It's a little hard to imagine it happening, but I have the journal entries to prove it, and I remember the therapy sessions where I was encouraged not to keep talking about myself so negatively. Just get up and hit it again the next day. Make it a goal every day to get it done.

In the meantime, I worked on religiously tracking everything that went into my body. I'd eat about 1,500-2,000 calories a day (and also vowed never to go below that unless advised by a doctor—I was looking to establish healthy habits, not die of malnutrition) and watch the effect on the weight chart.

I started looking harder at what I ate, too: I looked for foods that helped me maintain energy while having a low calorie load. This has gradually shifted my diet, reducing but not eliminating carbs and consuming a lot more protein. Keeping an eye on that balance. Watching my body's reaction.

I started walking to break up the stationary biking monotony. At first, this was a huge calorie burn, especially because my heart rate went pretty high every time I exerted myself more than a little bit. Over time, it became easier; today, I need to keep a pretty quick pace to even approach 120 bpm. (I don't run—I have some ankle problems, and surgery is not a great option for me. But walking is okay.)

Through it all, I kept up therapy, and I worked hard sticking with that APAP machine to keep my sleep going. There were times where I slid back on one or the other, and let me tell you, if one vertex of that triangle of sleep, emotional well-being, and physical well-being becomes stressed, it pulls on the other two. But I came back out every time and got things back into balance, and I'm super proud of myself for it—and it's now so much easier to get things back on track the next time, because I know what to do, and I know what works for me.

Into the future

It's much harder to lose weight where I am now—I'm only shedding about ⅓ pound a week now, and sometimes it feels like a push—for awhile I thought I might never get to write this post. But on those days, I draw a line on how much I'll push myself, and remind myself that hey, you got this far, you're staying here and not sliding back, and that's awesome. A month ago, I even started bicycling four miles to the bus stop to work and four miles home again. I never would have dreamed that would have been me a year ago, but here we are. As I write this, I've got my silly little bike helmet next to me. It feels great.

At my last physical, my doctor's PA (who is also really good—good doctors are so important) and I agreed on a target weight that's still another 30 pounds away; that seems like it's going to take awhile. I'm planning on going for it, but you know what? I'm so much happier where I am today. If it turned out that my 40-year-old body was at its equilibrium point here at the end of all things, I'd be 100% content with being where I am. I feel good (most of the time), I sleep well (most of the time), and I'm able to deal with my anxiety and depression like a ninja (most of the time). It's a great place to be.

So what should anyone else take away from all of this? I'm not trying to be a diet and exercise guru, but I do know this:

  • I never would have made it this far without that first visit to the doctor setting me on my way.
  • I needed energy to succeed at losing weight and getting fitter, and I wasn't going to get it without solving the problems that were draining that energy away.
  • Physical well-being, getting good sleep, and emotional well-being are three vertices of a triangle that needs to be in equilibrium. If one vertex gets stressed, it pulls on the rest.

I sometimes wish it hadn't taken me 40 years to learn these things. But that's okay. I know it now, and with that knowledge I'm changing the future.

One more lesson

There's one other high-level lesson I'm pleased to have pulled from this: science wins. Sleep science knows that sleep deprivation affects you negatively. Psychological science knows that mental illnesses affect you negatively. Dietary science knows that calorie restriction is the only proven way to lose weight. Exercise science knows that exercise makes your body work better. All of these lessons have been proven out, and I was fortunate to have good professionals helping me along at every step of the way.

I didn't get any of this from exploitative, pseudoscientific charlatans, whose thumbs I suspect many people are struggling under today. You can get a quick “win” feeling from a product that sounds cool but has no proven benefit (and possibly some awful risks), but you won't improve your life from anything that product gives you.

So, please. Stick to science—because I care about you and I want you to get better. Change your doctors if you're not getting what you need. Work at everything that drains you, a little bit at a time. Advocate for your own care. Advocate for the care of others—yes, this includes doing so politically! But don't throw your money or encourage others to throw theirs at unscientific “wellness”; it can't help you.

I may not know you, dear reader, personally. But I care about you and I want you to work on yourself. Your needs are almost certainly very different from mine, but maybe my story can help encourage you to attack what's draining your energy and remove it from your life, giving you what you need to improve yourself. Maybe you even have sleep or mental illness to deal with specifically, in which case, great! Talk to your doctor about this. Talk to them about whatever it is that's draining you.

But most of all, love yourself by treating yourself well. You deserve it.

by Matt Behrens at March 18, 2019 07:57 PM

Retiring brewdo

It's been a long time since I've written a post just for this site, hasn't it?

Nearly three years ago, I joined Atomic Object. (I actually started in August 2014. We tend to publish our welcome posts a little while after new Atoms settle in. Also, holy cow. Look at me then and now. I guess I have lost a lot of weight!)

At the time, I was pretty actively blogging here and exploring my role in the tech community, sharing projects on my GitHub both actively-used and experimental, and even trying out giving talks. In retrospect, I was doing a lot. And it took a lot of time in addition to my now much-more-engaging work as an Atom, which is probably why my GitHub contribution graph seems to have dropped off since that August.

And so I come now to some housekeeping I've been doing today. One key piece of that housekeeping was deciding what to do with brewdo.

You can read up on why I originally made brewdo here on the blog. Since then, Homebrew has added their own sandbox, which addressed the most important thing that brewdo does that I care about. I've been running Homebrew in $HOME/Library/Homebrew with that support in play for some time, and I've been really happy with it.

So I think now is the time to mark brewdo as unmaintained. I get bug reports on it every so often, mostly having to do with migration or installation, problems that take a lot of effort to even work on. And I just don't have the personal energy for it. I want to make room in my life for other things.

I'll be doing that shortly. If someone wants to take over brewdo, I'd be thrilled to pass it on to them! Just get in touch.

by Matt Behrens at March 18, 2019 07:57 PM

Representing function properties in TypeScript

We’ve been using TypeScript on an Electron project. It’s been a huge win already—a little additional upfront investment gives us more confidence that our code is correct and reduces the chance that it will pass unexpectedly-shaped objects around, a source of many bugs in my past Node applications.

But sometimes, it’s not immediately clear how to type certain kinds of objects. You can, of course, represent these as any whenever you need to—but any any you rely on can weaken your code’s quality. Last week, I discovered another way to avoid falling back on that crutch, thanks to the power of TypeScript’s type system.

Electron applications rely on IPC to communicate between their main Node process and the renderer processes that present the user interface. Because our application uses IPC extensively, we decided to wrap Electron’s IPC libraries in a lightweight custom object that could emit log messages. This would allow us to trace IPC problems, and it could easily be replaced by a fake IPC implementation for unit testing.

To implement the logging of incoming IPC messages, we attached a wrapper function to Electron’s IPC library instead of the requested listener, like this:

ipcMain.on(channel, (event: Electron.IpcMainEvent, ...args: any[]): void => {
  console.log(`heard ${channel}`, args);
  listener(event, ...args);
});

This worked great until we needed to implement one new piece of functionality: removing a defunct listener.

I’m Not Listening

Removing a listener from an EventEmitter is important in a long-lived process, especially if you’re attaching listeners to a long-lived object like Electron’s IPC implementation.

If you fail to do this, you’ll not only be leaking memory by creating references that can’t be garbage-collected. You’ll also potentially be setting your application up for hard-to-trace bugs when zombie listeners you didn’t think were still around come roaring back to life.

If you’re simply listening to one event, solving this problem is fairly easy—just use .once instead of .on, and the EventEmitter will take care of it for you.

If you’ve got multiple listeners, though—like a pair of success and error listeners, one of which must remove the other—you must use .removeListener, and that requires a function reference to identify which listener to remove. Because we wrapped the real listener, we need to ask the EventEmitter to remove our wrapper, which we don’t have a reference to—and tracking it is an exercise in complexity that I’d rather not add to a wrapper class.

The solution I arrived at involved attaching a .wraps property to our wrapper functions, holding a reference to the listener function:

function wrapCallbackWithLogger(callback, message) {
  const listener = (event, ...args) => {
    console.log(message);
    callback(event, ...args);
  };
  listener.wraps = callback;
  return listener;
}

This allowed me to write code that would search the listeners attached to any particular IPC channel for the wrapper function wrapping the listener we were asked to remove:

const listenerToRemove =
  listeners.filter(candidate => candidate.wraps === wrappedListener)[0];

Unfortunately, none of this made TypeScript very happy. And that is as it should be; Functions don’t have wraps properties!

Declaring Our Intent to Wrap

The very first thing I needed to do was declare some types so that TypeScript would understand the shape of our wrapper function. The function I wanted to wrap was easy enough; Electron types already had IpcMainEventListener and IpcRendererEventListener for both sides of its IPC implementation. I decided to write my own generic listener type:

declare type IpcEventListener<E> = (event: E, ...args: any[]) => void;

Now that I had this type, I could extend it with the .wraps property easily:

interface WrappedIpcEventListener<E> extends IpcEventListener<E> {
  wraps: IpcEventListener<E>;
}

Building the object was a bit trickier. In my original, TypeScript inferred listener as a basic callback for the IPC event listener, so it wouldn’t allow me to add the wraps property, and the basic callback didn’t satisfy WrappedIpcEventListener. The solution turned out to be doing it all in one step:

function wrapCallbackWithLogger<E>(
  callback: (event: E, ...args: any[]) => void,
  message: string
): WrappedIpcEventListener<E> {
  return Object.assign(
    (event: E, ...args: any[]) => {
      console.log(message);
      callback(event, ...args);
    },
    {wraps: callback}
  );
}

Object.assign was the final ingredient to making the wrapping work—it took the wrapper callback and a new object containing just the wraps property. The result matched the WrappedIpcEventListener interface perfectly.

Making the filtering work required a little cast (as the listeners method on EventEmitter returns Array<Function>), but I was comfortable with it. If a candidate function didn’t have a wraps property, it would return undefined, never matching the listener we want to remove:

const listenerToRemove: WrappedIpcEventListener<E> =
  (listeners as Array<WrappedIpcEventListener<E>>)
    .filter(candidate => candidate.wraps === wrappedListener)[0];
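
The wrapper found this way is what finally gets handed back to the emitter; a minimal sketch, assuming ipcMain and channel are in scope in the surrounding wrapper class:

if (listenerToRemove !== undefined) {
  // Remove the wrapper we attached earlier, not the caller's original callback.
  ipcMain.removeListener(channel, listenerToRemove);
}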

With all this in place, the TypeScript compiler is happy, and we’re happy because we keep our extraordinarily useful IPC wrapper.

by Matt Behrens at March 18, 2019 07:57 PM

Can the macOS Disk Utility really erase an SSD?

Laptop computers, especially those with a lot of internal storage, are very convenient. In the same amount of physical space that a magazine would take up, we can carry an amazing amount of data with us and work with it anywhere. One flip-side of that benefit is that all that data remains inside that computer even after we’ve moved on to a new one, unless we take steps to erase it first.

With older laptops featuring spinning magnetic hard disk drives, a lengthy, random erase process was the best way to go. But that’s not true for modern MacBooks with their solid state drives; Apple has even removed the option. So how do we go about erasing these computers? And do those processes work?

Note: Since this article was first posted, there has been some confusion about the setup used. I’m using a MacBook Pro with its built-in SSD. I’m also running Disk Utility directly on the MacBook itself, not over Target Disk Mode. This process has always been YMMV, but particularly if your setup is different than mine, expect variations.

The Best Way

By far, the best way to keep your data secure is to use full-disk encryption, e.g. FileVault. Every bit of data you write to any disk after you’ve enabled FileVault on it is unreadable without the key, protecting it even if you lose the computer or it’s stolen.
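
If you’re not sure whether FileVault is already turned on for a given Mac, checking takes a single Terminal command; fdesetup ships with macOS, and the output shown here is the enabled case:

$ fdesetup status
FileVault is On.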

Erasing the computer is now really easy, too. Everything on the computer is useless without the encryption key, so you simply need to erase the key itself. Since the key is cryptographically secured by your password, you just need to not sign into the computer—but you can also erase the encrypted key, too, with a simple disk erase.

But what if you didn’t use FileVault? Your disk is now full of data that could be sensitive. You’ll have to get rid of it somehow.

The Fallback Way

Apple recommends that, if you’re giving away or selling your Mac, you should simply erase it with Disk Utility first.

This advice puts people like myself, who have had long histories with hard drives and understand how they “delete” data—by leaving it around and just “losing track” of it—on high alert. If you just did a simple, quick erase on a hard drive ten years ago, any competent data recovery software would turn up a goldmine of data.

Erasing a disk the quick way in those days only put a new filesystem header on the front of the disk, like replacing the table of contents of a book with an empty one, but leaving the rest of the pages in the book intact. They did this for speed; overwriting all the data on a disk takes many hours. But it leaves a lot of data behind, which is why you’ll find plenty of articles advising how to use the macOS command line to force a hard-drive-style secure erase—where you overwrite it with random data many times—on a solid-state drive.

Thankfully, there’s a way that you can have a modern hard drive—old-style spinning or solid-state—erased very quickly, and securely. It’s a close cousin to full-disk encryption, and it’s called a secure erase.

A new drive that’s capable of secure erase has a random encryption key generated for it on day one. That key is kept on the drive, and all data written to it is encrypted with that key. When a secure erase is requested, that key is destroyed, leaving all the encrypted data unreadable.

Apple, being Apple, isn’t telling us (at least, not anywhere I can find) if their Disk Utility erase process is actually a secure erase. I decided to look into whether a Disk Utility erase does leave easy-to-read breadcrumbs behind, or whether it cleans up after itself.

Creating Some Data to Find

A disk—any disk—is basically a giant file, the size of the entire disk. The easiest way to look for data to be recovered on a disused disk is to scan it, beginning to end, and look for patterns that indicate useful data.

The first thing I needed to do to test this out was fill a disk with data I could easily find again. To do this, I took the Ann Arbor office loaner MacBook—recently erased from its last borrower—and half-filled its disk with a bunch of files.

(Warning: if you do this, you’re going to fill your disk with junk—25,000 copies of a 4.6 megabyte file containing 100,000 copies of the phrase “The quick brown fox jumped over the lazy dog.”—enough to fill half a 256 gigabyte SSD, which was my goal.)

$ for n in `seq 100000`
> do
>   echo 'The quick brown fox jumped over the lazy dog.'
> done >template.txt
$ for n in `seq 25000`
> do
>   cp template.txt template_$n.txt
> done

That done, I verified that the disk space was actually taken up.
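
One quick way to make that check from the same shell (exact figures will vary, so the output is omitted here):

$ df -h /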

Now, to inspect the raw disk, I had to reboot; macOS doesn’t allow access to the raw disk device with standard Unix tools, even if you’re root. I also found out the macOS recovery partition didn’t have the tools I needed, so I booted Ubuntu instead.

Once in, the incantation to scan the disk—this will read the entire disk in 1 megabyte chunks, and pass it through a hex dump tool that we can use to visually inspect the data:

# dd if=/dev/sda2 bs=1024k | hexdump -C

And a large portion of the output—which I stopped, because it would take far too long to visually read the whole disk—looked like this:

Erase and Aftermath

If I were to do a naïve erase of this disk by writing just a new filesystem header to the beginning, like most old-school disk erases did, the vast majority of this data would still be fully readable.

But I wasn’t planning on doing an old-school disk erase. My next step was to reboot into the macOS recovery partition and erase the disk with Disk Utility like Apple advises.

I didn’t bother reinstalling macOS into the newly-erased drive. It might overwrite some of the data if it hadn’t been completely erased, but it certainly wouldn’t overwrite all of it regardless. Opting to skip the install step entirely gave me the greatest chance to find any trace of the data.

Once erased, I rebooted into Ubuntu one more time, and ran the same command. The output was much shorter this time—I let it run to the end, seeing no trace of my data, but just this:

The middle is where our data would’ve been—it’s over 250 gigabytes of zeroes. Apple’s recommended erase procedure has, in the space of a few seconds, replaced all our old data with a big empty expanse of nothing.

Conclusion and Caveats

So what does this mean? This is exactly what I’d expect to see if Apple had, in fact, implemented a secure erase with Disk Utility, like we suspected. It means that whatever data you had before the erase is inaccessible to just about anyone who acquires your computer, which is great news for anyone who might want to grab a copy of Disk Drill and start digging.

It doesn’t mean that data is guaranteed to be gone, however. Unless we have evidence that Apple actually is secure-erasing the drive, there are processes by which more well-resourced adversaries could recover data—for example, if they were simply marking every part of the drive as “free”, it’s possible someone could convince the SSD to give up that data once again.

Given this, your safest bet is still to always use full-disk encryption on any MacBook. However, I think it’s reasonable to assume that unless your threat model includes adversaries who will spare no expense to recover your data, if you haven’t used FileVault, you don’t need to be anxious that data you wrote in the past to this computer is a problem.

My recommendation is this: use FileVault going forward, and make sure you give your computer a regular erase before you give it up.

This article originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Virtual network customization in VMware Fusion

VMware Fusion is a powerful tool for developers that need to virtualize systems. Its networking functionality is also powerful, but somewhat hidden. In my latest post, I dive into customizing virtual networks over at Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Security hygiene for software professionals

A topic near and dear to my, and I hope every software professional's heart is how to be as secure as possible. I've covered a number of ways you can practice good security hygiene over at Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

In memoriam

Content warning: death, mourning.

I've felt significant loss in the last part of 2018. We lost my spouse's father, a wonderful, kind man who loved his grandchildren. We lost my nineteen-year-old cat, the most special pet I've ever had, who loved everyone he saw and always wanted to be involved in what we were doing.

I've been thinking about what it means for someone to pass on. Religious schools of thought often teach us that the souls of the departed move on somewhere else, but as I've developed my own spirituality I've come to think differently—not least of all because this thought makes no room for the dear friend who came back, not from the dead, but from a long and saddening absence.

I know people take comfort in the religious idea that those who we've lost are in some kind of beyond-the-grave contact with those they've left behind. I believe there's merit to this—that it's our memories of them that continue to touch us.

Those who were close to us leave a deep imprint on us, and when we see them in our dreams, speaking to us about modern concerns they did not experience while they were still with us, I believe it's the collection of experiences we had with them and the patterns they impressed on us roaming our subconscious minds and building these new thoughts.

Even in our waking hours, we find the emptiness of life without these people difficult to bear. They've become a part of us, just as we were a part of them. We feel that absence whether they're just gone for a time or gone forever, and we fill that hole in our hearts with old memories, building on them and making them into something new.

In this way, I believe we can derive some comfort from what we had with those we once had with us, helping us process and mourn. We don't need to specifically embrace any given belief system to touch this—we don't need to think “well, they're gone, and that's it,” because we were all touched, down to our core, by our loved ones.

And they'll always be with us. We were changed by their presence in our lives. We were deeply enriched for having them close, and they will always live with us, until the day we pass on, leaving others with memories of not just us, but everyone that came before us as well.

And I, for one, take great comfort in that thought.

by Matt Behrens at March 18, 2019 07:57 PM

Natalie Nguyen

A year ago today, a young woman named Natalie Nguyen committed suicide, and her death reverberated through the community on Mastodon that I had only been a part of for a few months. I learned about it the next day.

She was not a part of my immediate circles, though we shared many friends. I could feel the pain of her loss through them. She was a light in their lives and extinguished far too soon.

But as if it wasn't cruel enough that the world took her from those friends, what happened afterward hurt them all more. The news reports originally called her a young man. And after a brave crew of those who knew her sought out her parents and shared the Natalie they knew, those same parents buried her in a suit under a name that wasn't hers.

I'd say those friends were shocked, but it was a story they were all too familiar with. Natalie was a transgender woman, a beautiful soul, subjected to the tortures of a world that refused to accept her for who she was. So many of her friends shared that experience—the happiness of living as they were, but the pain of constant denial from those around them.

Some of our community memorialized her in the network messages that move even today through the Mastodon network, piggybacking on communications between the servers. Every time one of those servers answers a request, it says “X-Clacks-Overhead: GNU Natalie Nguyen”, keeping her memory alive.

Today, my friends are crying, remembering. I'm crying for them—I don't want them to hurt. I write this now, mostly because it's heavy on my heart and I must, but also in the hopes that some hearts, somewhere, unfamiliar with the pain our queer family shares, understands… and perhaps takes some small action to make things better for all of us.

We all watch out for each other, however we can, in this big family I'm a part of. Many of us know what that pain is like. We hope that together, we can hold each other, be there for each other, help each other. Because we all deserve to live.

Natalie will live on in so many hearts. She touched mine, even though I never knew her. I hope that, through me, she touches yours as well.

“if my existence makes random people on the internet happier, then i did good in this world.” —Natalie Nguyen, September 16, 2017

by Matt Behrens at March 18, 2019 07:57 PM

A JavaScript object that dynamically returns unknown properties

In our current project, we make extensive use of JavaScript objects as dictionaries, with the property name functioning as a key for the object we want to look up. We can use the in operator to test for property presence, and the dictionaries are perfectly JSON-serializable.

However, when it comes time to build test fixtures around these dictionaries for testing code that might look up lots of different keys, creating the test data for all of these keys becomes a large effort. Luckily, ES2015 has a solution.

The Old Way

Before I found this solution, I had code that looked like this:

function generateValue(key) {
  return {data: key + '-data'}
}

export const FIXTURE = {
  a: generateValue('a'),
  b: generateValue('b'),
  c: generateValue('c'),
  d: {data: 'some-real-meaningful-data'}
};

This worked, but as I mentioned, we were looking at having to build out lots of these generated values.

The New Way

Thankfully, wrapping a Proxy around a JavaScript object allows us to override key behavior, including property lookups and retrieval. It turns out to be really handy for this use case.

We can keep our generateValue function, so that we generate unique values for every key in the dictionary. We can also keep any non-generated values. Our new fixture code looks like this:

export const FIXTURE = {
  d: {data: 'some-real-meaningful-data'}
};

export const MAGIC_FIXTURE = new Proxy(FIXTURE, {
  get: (target, prop) => prop in target ? target[prop] : generateValue(prop),
  has: (target, prop) => true
});

We’ve defined a new fixture, a MAGIC_FIXTURE that has special lookup behavior:

  1. For any property access, it will first check to see if the wrapped object has the requested property, and if so, return it. (This allows consumers to still access the fixed d property.) If it doesn’t exist, it generates and returns a new one on the fly.
  2. It claims to have any key requested. This allows consumers to do a check such as 'a' in MAGIC_FIXTURE—a common pattern we use in assertions in our production code to catch invalid accesses.
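
For example, here is a quick usage sketch of that behavior (the q key is arbitrary and not part of the real fixture):

const fixed = MAGIC_FIXTURE.d;      // {data: 'some-real-meaningful-data'}, the fixed value
const generated = MAGIC_FIXTURE.q;  // {data: 'q-data'}, generated on the fly
'anything' in MAGIC_FIXTURE;        // true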

While working with the Proxy object for this problem, I realized I could create a new kind of dictionary as well—one that would automatically assert that a requested key was present, throwing an AssertionError if it wasn’t there:

const assert = require('assert');

function safeDictionary(dict) {
  return new Proxy(dict, {
    get: (target, prop) => {
      assert(prop in target, prop + ' key not found');
      return target[prop]
    }
  });
}
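
A brief usage sketch (the dictionary contents here are hypothetical):

const config = safeDictionary({ port: 8080 });
config.port; // 8080
config.host; // throws AssertionError: host key not found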

Proxy objects support lots of other behavior overrides as well, and they can be used on many things—not just basic objects like this.

Of course, you should be very careful using them. You can very easily cause unexpected behavior if you’re not careful to keep consuming code’s expectations met—but they can provide very powerful capabilities when passed into code you don’t control.

Happy Proxying!

This article originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Setting up Windows to build and run Node.js applications

Node.js is just JavaScript, right? So it should be really easy to run Node.js applications on Windows—just download and install Node, npm install, and go, right?

Well, for some applications, that’s true. But if you need to compile extensions, you’ll need a few more things. And, of course, with Node.js itself being constantly under development, you’ll want to lock down your development to a version your code can use. In this post, I’ll talk you through how we get our Windows command-line environments set up for the Node.js (actually, Electron) application my team is developing.

First Things First

No one wants to waste time hunting down downloads for a development environment. Instead, install Scoop first, and you’ll get a nice, clean way to add the packages you’ll need without a single web search.

Once you’ve got Scoop installed, it’s time to add some packages. For just Node.js, you’ll want the nodejs package, plus nvm for version management with NVM:

scoop install nodejs nvm

If your project uses Yarn, as ours does, you can grab that from Scoop, as well:

scoop install yarn

If you’re planning on checking out or committing code to GitHub, you’ll also want tools for that:

scoop install openssh git

To finish setting up Git with OpenSSH, note the post-install message that tells you to set up the GIT_SSH environment variable.
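
For example, a one-liner in PowerShell might look like the following; the exact ssh.exe path is an assumption, so use whatever path Scoop's post-install message reports:

[Environment]::SetEnvironmentVariable('GIT_SSH', "$env:USERPROFILE\scoop\apps\openssh\current\ssh.exe", 'User')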

Finally, in case you want to quickly do things as an administrative user (which you may, later in this post!), I recommend you install Sudo, which knows how to elevate privileges inside a PowerShell session without spawning a brand new one:

scoop install sudo

Managing Node.js versions

The next thing you’ll want to do is make sure you’re on the right version of Node.js for your project. We’re using the latest LTS version for ours, which as of the time of this writing is 8.11.2. So we issue two NVM commands to install and use it:

nvm install 8.11.2
nvm use 8.11.2

If you’re familiar with NVM on Unix-like systems, you’ll find it works a little differently on Windows with Scoop. When you use a new Node.js version, it will update the binaries under scoop\apps\nvm instead of in $HOME/.nvm.

If you use a version and it doesn’t seem to be taking effect, check your PATH environment variable in the System Properties control panel (search for “environment”); it’s probably been re-ordered. Move the path containing scoop\apps\nvm to the top, and the NVM-selected version will now take precedence.

Compiling Extensions

We don’t have any of our own extensions that need building in our project, but some of our dependencies (namely, node-sass) do.

Extensions like these are built with node-gyp, and node-gyp needs two things: Python (2… wince) and a C compiler, neither of which are standard equipment on a Windows system. If you don’t have them and you need them to build extensions, you will see a long string of gyp ERR! messages when you install dependencies.

Thankfully, there’s a reasonably easy way to install them already configured for node-gyp: windows-build-tools.

After you’ve installed the Scoop nodejs package above, and assuming you installed Sudo, you can now run:

sudo npm install --global --production windows-build-tools

Note that we have observed these installers rebooting a system at least once, which effectively aborted the process. We fixed this in this one case by re-running the installer like so:

sudo npm uninstall --global windows-build-tools
sudo npm install --global --production windows-build-tools

The Moment of Truth

If all the installations worked, you should be ready to go. For our application, a

yarn install
yarn start

was all we needed—of course, you’ll want to start your application however you do normally.

In our case, our application started up and we were off and running.

This post originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Feeling Pride at Atomic

I am a bisexual man, and last November, I came out to everyone at Atomic.

In any other job I’ve worked, I likely would have endlessly vacillated and probably just mentioned it in passing to a few coworkers. “Who needs to know?” I would have asked myself. And I would have kept quiet.

But from my friends here, I felt support. Respect. I knew that in this environment, I could bring my whole self and freely advocate for all my siblings in the LGBTQIA+ community. What I didn’t expect was how much making that move would pay off for me personally.

The day I came out to Atomic feels like so long ago now. I was surprised to go back in Slack history and find out that it was actually just a little over half a year ago. I mentioned my own orientation at the same time I was sharing Invisible Majority, a report on the disparities bisexual people face in their lives and at work, on our internal discussion channel for inclusion-related topics. That very day, another Atom raised her hand and joined me.

Maybe it feels like so long ago in part because it’s been a long journey for me to get here. Well over two decades ago, I knew something was different about me, but the culture I grew up in told me that my “something different” was wrong. It took me many years of working through a good amount of internal negativity, followed by a long stretch of hiding my true self from everyone but my spouse and a few very close friends, to get to the point where I could finally be out as who I truly am.

Along the way, I’ve seen the struggles of many people who are kept at arm’s length for who they are or how they love, but love proudly nonetheless. I’ve heard so many stories of wedges driven between family members over one’s identity, and stories of acceptance within brand-new families made up of LGBTQIA+ friends. I’ve been saddened by people having to hide who they are because it’s the only way they can function in society, but heartened to know they still believe in themselves. I’ve learned a lot about the history of pain, struggle, and victory in the LGBTQIA+ community—my community—and I want to work toward a world where we are understood and celebrated, instead of feared.

Today, we have a small, but more-than-representative group of LGBTQIA+ Atoms across both offices. We’ve celebrated with each other how good it feels to bring our whole selves to work. We have and continue to critically look inward and seek to effect change to make Atomic more inclusive. We scrambled to find something ostentatiously rainbow-colored for me to wear on my birthday earlier this year. But primarily, we are together to be a community where we understand each other.

At Atomic, we offer benefits to all Atoms’ legally-married partners. We made our restrooms clearly gender-neutral. We specifically invite all Atoms’ significant others to our social events. We joined the Michigan Competitive Workplace Coalition with the goal of updating Michigan’s civil rights law to include sexual orientation and gender identity. (I was recently very happy to hear about progress toward that goal!)

But what has ultimately touched me most has been the love and support I’ve received from several Atoms since I took that step. These Atoms have made me feel more welcome as my real self than I know I would have felt working anywhere I have before.

Being out at Atomic has been a great experience. And I want everyone, everywhere, not just at Atomic but all over Michigan, the United States, and the world, to have experiences like this—to be free to live, be, and, most importantly, celebrate who you are.

That’s why I was personally inspired to write this post. Nobody asked me to, though several Atoms I spoke with about the idea encouraged me. I wanted to share my experience with my siblings in the LGBTQIA+ community, as well as my hope that you have an experience like mine, wherever you are.

Happy Pride. Be true to yourself. And give your love and support to everyone, no matter who they are, or how they love.

This post originally appeared at Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Review: end-to-end encrypted notes with Standard Notes

I’ve been looking for a software solution I can trust for writing, journaling, and taking notes securely. Many options exist, but they never quite fulfilled the demands of my wishlist: multi-device, cloud-synced, end-to-end-encrypted, and open.

A few months ago, though, I discovered Standard Notes, and now I can’t imagine accepting any other solution.

Standard Notes feels like the kind of solution I’d engineer if I were calling all the shots. The service is entirely open-source, to the point that you can self-host it. It’s simple by default, giving you exactly and only what you need. It stores only end-to-end encrypted blobs of data, meaning the server never has access to your data. The software takes pains to protect your data against loss. And despite all this nerd-tier stuff, it’s very easy to get started.

As of this writing, you can sign up for the free tier on their website and start using Standard Notes immediately, with unlimited cloud-synced note storage and access to all the clients—web, mobile, and desktop. It’s almost too simple to mention.

One of the most useful features you get, even with the free tier, is Device Storage Encryption. In short, this means that even if you’re using full-disk encryption, there’s an extra layer of security to make sure that your keys are never stored unencrypted on the system, and your notes are securely encrypted whenever the app is closed. All you need to do is enable Passcode Lock in your account settings on the desktop to get this support; on iOS, just turn on Storage Encryption, and maybe Fingerprint Lock while you’re in there.

The free tier doesn’t give you access to any extensions, but it does give you the aforementioned unlimited note storage and the standard plain-text editor. I installed apps on my iPhone and my MacBook to start, turning on DSE to give my notes extra protection.

I really like having a place where I can just write…anything. Scratch space for writing something that I’m going to publish or send to someone. A quick outline of a brain dump someone is sharing. Private thoughts, journaling happenings in my life. I can do all of this on my desktop or on my phone, depending on where I am, at any time.

I never have to worry about what I write living on someone else’s server, protected by their encryption keys—everything is always under the keys only I have. Writing with this freedom is something you can’t get with other cloud-based solutions that access and/or store your unencrypted content. With this solid, secure architecture in place, I even felt comfortable recommending Standard Notes to my therapist for other patients who might find it useful for journaling.

I ran with this setup for probably a week before I decided that although I was perfectly happy with it, I wanted to both support the project and get easy access to those extensions.

Standard Notes extensions are for the desktop and web apps specifically. They run the gamut from Markdown, HTML, and Vim-emulating code editors to to-do lists and themes, as well as automatic sync, backup features, and even a feature that lets you publish selected notes to a blog.

I’m personally only using the Advanced Markdown Editor, which formats your documents live as you use Markdown conventions and offers a live preview option besides. Whatever extensions you’ve used are automatically available wherever you use the web or desktop apps, so when I added Standard Notes to the inexpensive Windows 10 laptop I picked up last year, everything worked exactly the same way it did on my MacBook.

Supporting Standard Notes feels different from subscribing to many other software services. I can actually do just about everything myself—it’s all on GitHub (including the extensions!) and I could certainly self-host it all. But I feel compelled to support this project because it’s been desperately needed in the world, filling a niche that hasn’t been adequately explored, and doing so in an amazingly open way. Its existence is a dream come true for me, and I want to make sure it’s sustainable.

If you’re looking for a place to do your writing, note-taking, or journaling, I strongly suggest you take a look at Standard Notes. I was amazed that it existed when I found it, and I’m a dedicated user and proud supporter now.

This post originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Why a no-moonlighting guideline benefits employees

I had an old employer reach out to me the other day asking if I’d like to do some contract work for them. As I have in all these situations, I recalled Atomic’s guideline for Atoms—we should not do work on the side that competes or conflicts with Atomic’s business.

While it’s immediately clear how such a guideline protects Atomic’s business, I’ve also found that it’s really helpful for me personally.

Sustainable pace is an important Atomic value—one that attracted me strongly to becoming an Atom in the first place. It’s something I strive to live out personally, and something I watch my fellow Atoms for, so I can help support them if they’re feeling stress and are at risk of spending more energy than they have.

Atoms commit to a roughly forty-hour week, spending the majority of that delivering value to clients, and a small part sharing responsibility for the business and for each other. We go home and pursue other interests every day, which keeps us in balance, not just to give us the energy to do good work for our clients the next day, but also to make us richer human beings.

Moonlighting threatens sustainable pace by asking us to push past that sustainable pace. It erodes our ability to be the best we can be during the day, as well as after we close our computers and leave for the day. It turns us from healthy human beings into constantly-drained machines, never getting the chance to recharge our brains, wiring them to do just one specific thing instead of being all that we can be.

Working for a past employer again specifically can also stunt our growth. Positions we’ve held in the past are part of us; they have made us better consultants by giving us a wide range of experiences. But returning to those positions is often a return to old mental pathways well-explored; it’s better for both us and those employers that new people come on to bring new perspectives and add to their own experience. Atomic can even help them here, if it makes sense for them to work with us, by letting them work with new-to-them faces from our own team.

To be all you can be as a consultant, and as a human being, I believe diversity of experience is critical. Being able to focus on each challenge at Atomic in turn as we move from project to project, and being able to put it all down, live and have a healthy balance in our lives, makes us stronger at our jobs as well as better human beings.

And that’s why I have to politely decline when an old employer asks if I’d like to do work on the side for them, and why I steer them toward working with us, if it’s appropriate. Moonlighting is not just something that’s in competition with Atomic; it’s very much in competition with me being my best self.

This post originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

Spreading the spread and rest love

JavaScript’s spread syntax has proven to be an extremely useful tool while working with immutable data structures as part of a React/Redux project.

Now that it’s widely available for objects in LTS Node 8 (as it has been for some time for other runtimes via TypeScript), it’s interesting to go back and take a look at all it can do.

Object Spreads

In our codebase, object spreads get the most use by far. They look like this:

const x = { a: 1, b: 2 };
const y = { ...x, c: 3 }; // y == {a: 1, b: 2, c: 3}

Using spread syntax, we expressed that y, a brand new object, should be composed of all of x’s properties and values, with c added to it. Most crucially, x is not modified at all—it is exactly the same object, untouched, as it always was.

Not modifying x satisfies a requirement for shallow immutability—that is, we know that if we keep a reference to x, it still has exactly the same property list that it always had, and none of its properties will point to any new objects. But we now also have y, which is x, but subtly changed.

It’s important to remember what shallow immutability doesn’t give us, though. Notably, if any of x’s properties are mutable objects themselves, those objects can change on either x or its spread descendants, and the change will be visible across all of them. For this reason, it’s important to use object spreads on all the objects you’re modifying, like so:

const x = {
  a: 1,
  b: {
    c: 2,
    d: 3
  }
};

const y = {
  ...x,
  b: {
    ...x.b,
    e: 4
  }
};

// y == { a: 1, b: { c: 2, d: 3, e: 4 } }

Of course, if you’re working on really deep objects, it’s a good idea to break up expressions like this into functions that can address the deeper parts of the object. You could also use a library like lenses to decouple the deep object knowledge from your implementation.

Destructuring Objects and the Rest Pattern

The complement to spreading objects into each other is using the rest pattern in a destructuring assignment to pull selected things out of an object in one assignment.

If you’re not familiar with a destructuring assignment, here’s one that pulls out properties from an object into separate variables:

const x = { a: 1, b: 2, c: 3 };
const {a, b, c} = x;            // a == 1, b == 2, c == 3

When we bring the rest pattern into play, we can pull a out and create a new object to hold the rest of x:

const x = { a: 1, b: 2, c: 3 };
const {a, ...y} = x;            // a == 1, y == { b: 2, c: 3 }

y is useful here because it is an immutably-derived version of x that is missing the a property. We don’t have to do anything with a; if we let it go out of scope and return y, we’ll be returning a new object that would represent what x would be with a deleted, except without mutating x.

You don’t need to use the name of the property for the variable you pull out, either. Just give the property a right-hand side, and whatever you name will spring into existence:

const x = { a: 1, b: 2, c: 3 };
const {a: y, ...z} = x;         // y == 1, z == { b: 2, c: 3 }, a undefined

Array Spreads

Array spreads work very similarly to object spreads, but the place where you put the spread becomes more important.

const x = [1, 2, 3];
const y = [ ...x, 4, 5, 6 ]; // y == [ 1, 2, 3, 4, 5, 6 ];
const z = [ 0, ...x, 4, 5 ]; // z == [ 0, 1, 2, 3, 4, 5 ];

The position of the spread determines where the spread array’s contents will appear in the new array. You can spread the contents of an array as many times as you need to, and anywhere:

const x = [1, 2];
const y = [ 4, 5 ];
const z = [ 0, ...x, 3, ...y, 6 ]; // z = [ 0, 1, 2, 3, 4, 5, 6 ]

Just like object spreads, array spreads are shallow. The original array still points to the same things, and now the new array points to those same things. Any mutation of those things will be visible in both arrays.

Destructuring Arrays and the Rest Pattern

Arrays can be destructured just like objects:

const x = [ 1, 2 ];
const [ y, z ] = x; // y == 1, z == 2

We can use the rest pattern to pull out the rest of an array:

const x = [ 1, 2, 3, 4, 5 ];
const [ y, ...z ] = x;       // y == 1, z == [ 2, 3, 4, 5 ]

We can’t, however, use the rest pattern quite as flexibly with arrays as we can with objects. A rest must be the last part of a destructuring array assignment—so we can’t pull everything until the last element in an array, for example. If our needs are too complicated to use destructuring and the rest pattern, we’ll have to resort to the Array API.
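
For example, grabbing everything except the last element (something the rest pattern can't express) is a quick Array API call:

const x = [ 1, 2, 3, 4, 5 ];
const y = x.slice(0, -1); // y == [ 1, 2, 3, 4 ]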

Function Call Spreads

Function call spreads are a great way to pass an array of arguments to a function that expects each argument to be passed in separately:

function x(a, b, c) {
  return a + b + c;
}

const y = [ 1, 2, 3 ];

x(...y); // returns 6

Much like array spreads, you can also use function call spreads positionally:

function x(a, b, c) {
  return a + b + c;
}

const y = [ 2, 3 ];

x(1, ...y); // returns 6

This particular pattern gets the most use when you’re writing adapters that can work on many different kinds of functions. It allows you to save off a list of arguments and actually call the function later, without using apply.
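
As a minimal sketch of that adapter idea (the defer helper is hypothetical, reusing the x function above):

function defer(fn, args) {
  // capture the function and its arguments now, call it later
  return () => fn(...args);
}

const addLater = defer(x, [1, 2, 3]);
addLater(); // returns 6, no apply needed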

Rest Parameters

The inverse of function call spreads is rest parameters, which let you collect an argument list of arbitrary length without having to work with arguments. For example:

function x(...y) {
  // for x(1, 2, 3), y is an array [ 1, 2, 3 ]
  // we'll use reduce to sum it
  return y.reduce((accumulator, value) => accumulator + value);
}

x(1, 2, 3);       // returns 6
x(1, 2, 3, 4, 5); // returns 15

Since you can use this as the inverse of spreading into a function call, you can use it in an adapter that can capture whatever arguments come in for later application.

But it’s less useful outside that sphere, in my opinion. While it might be tempting to make a function that can simply process an endless list of arguments (as above), it’s clearer to just pass an array in, with the understanding that the entire array will be processed.

One more thing: You can split your function parameters between defined and rest parameters, subject to the same restriction for arrays—the rest parameter must be the last one:

function x(y, ...z) {
  return [y, z];
}

x(1, 2, 3); // returns [ 1, [ 2, 3 ] ]

Argument Destructuring

Bringing it all together, there’s one more useful thing you can do with functions: use destructuring to pull arguments out of objects on the way in.

function x({y, ...z}) {
  return [y, z];
}

x({ y: 1, z: 2, zz: 3 }); // returns [1, { z: 2, zz: 3 }]

Everything you’ve seen above for destructuring assignments works here, including array destructuring and the rest pattern. This can be pretty handy when you need to pull apart a tiny object. But beware, if you’re dealing with a large one, you may want to shift that destructure either into the interior of the function or forgo it entirely to avoid making your function header too dense.

Hopefully, you’ve found some useful new syntax to make your JavaScript code more readable and object manipulation more convenient.

This article originally appeared on Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

The security spectrum of curl | sh

The curl | sh pattern for installing software is much-maligned, and can definitely be used carelessly. But how bad it actually is a more nuanced question than you might think, a topic I wrote about over at Atomic Spin.

by Matt Behrens at March 18, 2019 07:57 PM

September 08, 2018

Whitemice Consulting

Reading BYTE Fields From An Informix Unload

Exporting records from an Informix table is simple using the UNLOAD TO command. This creates a delimited text file with a row for each record and the fields of the record delimited by the specified delimiter. Useful for data archiving, the files can easily be restored or processed with a Python script.

One complexity exists: if the record contains a BYTE (BLOB) field the contents are dumped hex encoded. This is not base64. To read these files take the hex encoded string value and decode it with the faux codec "hex": content.decode("hex")
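
For example, in Python 2 (which the script below assumes):

'74657374'.decode("hex")  # returns 'test'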

The following script reads an Informix unload file delimited with pipes ("|") decoding the third field which was of the BYTE type.

rfile = open(ARCHIVE_FILE, 'r')
counter = 0
row = rfile.readline()
while row:
    counter += 1
    print(
        'row#{0} @ offset {1}, len={2}'
        .format(counter, rfile.tell(), len(row), )
    )
    blob_id, content, mimetype, filename, tmp_, tmp_ = row.split('|')
    content = content.decode("hex")
    print('  BLOBid#{0} "{1}" ({2}), len={3}'.format(
        blob_id, filename, mimetype, len(content)
    ))
    if mimetype == 'application/pdf':
        if '/' in filename:
            filename = filename.replace('/', '_')
        wfile = open('wds/{0}.{1}.pdf'.format(blob_id, filename, ), 'wb')
        wfile.write(content)
        wfile.close()
    row = rfile.readline()  # read the next record; without this the loop never advances
rfile.close()

by whitemice at September 08, 2018 08:05 PM

May 29, 2018

Whitemice Consulting

Disabling Transparent Huge Pages in CentOS7

The THP (Transparent Huge Pages) feature of modern LINUX kernels is a boon for on-metal servers with a sufficiently advanced MMU. However, it can also result in performance degradation and inefficient memory use when enabled in a virtual machine [depending on the hypervisor and hosting provider]. See, for example, "Use of large pages can cause memory to be fully allocated". If you are seeing issues in a virtualized environment that point towards unexplained memory consumption, it may be worthwhile to experiment with disabling THP in your guests. These are instructions for controlling the THP feature through the use of a SystemD unit.

Create the file /etc/systemd/system/disable-thp.service:

[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target

Enable the new unit:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

THP will now be disabled. However already allocated huge pages are still active. Rebooting the server is advised to bring up the services with THP disabled.
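
To verify the setting, check the sysfs interface; the active value is the one in brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]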

by whitemice at May 29, 2018 07:30 PM

May 06, 2018

Whitemice Consulting

Informix Dialect With CASE Derived Polymorphism

I ran into an interesting issue when using SQLAlchemy 0.7.7 with the Informix dialect. In a rather ugly database (which dates back to the late 1980s) there is a table called "xrefr" that contains two types of records: "supersede" and "cross". What those signify doesn't really matter for this issue so I'll skip any further explanation. But the really twisted part is that while a single field distinguishes between these two record types - it does not do so based on a consistent value. If the value of this field is "S" then the record is a "supersede"; any other value (including NULL) means it is a "cross". This makes creating a polymorphic presentation of this schema a bit more complicated. But have no fear, SQLAlchemy is here!

When faced with a similar issue in the past, on top of PostgreSQL, I've created polymorphic presentations using CASE clauses. But when I tried to do this using the Informix dialect the generated queries failed. They raised the dreaded -201 "Syntax error or access violation" message.

The Informix SQLCODE -201 is in the running for "Most useless error message ever!". Currently it is tied with PHP's "Stack Frame 0" message. Microsoft's "File not found" [no filename specified] is no longer in the running as she is being held at the Hague to face war crimes charges.

Rant: Why do developers get away with such lazy error messages?

The original [failing] code that I tried looked something like this:

    class XrefrRecord(Base):
        __tablename__  = 'xrefr'
        record_id      = Column("xr_serial_no", Integer, primary_key=True)
        ....
        _supersede     = Column("xr_supersede", String(1))
        is_supersede   = column_property( case( [ ( _supersede == 'S', 1, ), ],
                                                else_ = 0 ) )

        __mapper_args__ = { 'polymorphic_on': is_supersede }   


    class Cross(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 0} 


    class Supersede(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 1}

The generated query looked like:

      SELECT xrefr.xr_serial_no AS xrefr_xr_serial_no,
             .....
             CASE
               WHEN (xrefr.xr_supersede = :1) THEN :2 ELSE :3
               END AS anon_1
      FROM xrefr
      WHERE xrefr.xr_oem_code = :4 AND
            xrefr.xr_vend_code = :5 AND
            CASE
              WHEN (xrefr.xr_supersede = :6) THEN :7
              ELSE :8
             END IN (:9) <--- ('S', 1, 0, '35X', 'A78', 'S', 1, 0, 0)

At a glance it would seem that this should work. If you substitute the values for their place holders in an application like DbVisualizer - it works.

The condition raising the -201 error is the use of place holders in a CASE WHEN structure within the projection clause of the query statement; the DBAPI module / Informix Engine does not [or can not] infer the type [cast] of the values. The SQL cannot be executed unless the values are bound to a type. Why this results in a -201 and not a more specific data-type related error... that is beyond my pay-grade.

An existential dilemma: Notice that when used like this in the projection clause the values to be bound are both input and output values.

The trick to get this to work is to explicitly declare the types of the values when constructing the case statement for the polymorphic mapper. This can be accomplished using the literal_column expression.

    from sqlalchemy import literal_column

    class XrefrRecord(Base):
        _supersede    = Column("xr_supersede", String(1))
        is_supersede  = column_property( case( [ ( _supersede == 'S', literal_column('1', Integer) ) ],
                                                   else_ = literal_column('0', Integer) ) )

        __mapper_args__     = { 'polymorphic_on': is_supersede }

Visually if you log or echo the statements they will not appear to be any different than before; but SQLAlchemy is now binding the values to a type when handing the query off to the DBAPI informixdb module.

Happy polymorphing!

by whitemice at May 06, 2018 08:23 PM

Sequestering E-Mail

When testing applications one of the concerns is always that their actions don't affect the real world. One aspect of this is sending e-mail; the last thing you want is for the application you are testing to send a paid-in-full customer a flurry of e-mails claiming he owes you a zillion dollars. A simple, and reliable, method to avoid this is to adjust the Postfix server on the host used for testing to bury all mail in a shared folder. This way:

  • You don't need to make any changes to the application between production and testing.
  • You can see the message content exactly as it would ordinarily have been delivered.

To accomplish this you can use Postfix's generic address rewriting feature; generic address rewriting processes addresses of messages sent [vs. received as is the more typical case for address rewriting] by the service. For this example we'll rewrite every address to shared+myfolder@example.com using a regular expression.

Step#1

Create the regular expression map. Maps are how Postfix handles all rewriting; a match for the input address is looked for in the left hand [key] column and rewritten in the form specified by the right hand [value] column.

echo "/(.)/           shared+myfolder@example.com" &gt; /etc/postfix/generic.regexp

Step#2

Configure Postfix to use the new map for generic address rewriting.

postconf -e smtp_generic_maps=regexp:/etc/postfix/generic.regexp

Step#3

Tell Postfix to reload its configuration.

postfix reload

Now any mail, to any address, sent via the host's Postfix service, will be driven not to the original address but to the shared "myfolder" folder.
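
A quick way to confirm the rewrite is to send a test message (this assumes a mailx-style mail client is installed; the recipient address is arbitrary since it will be rewritten anyway):

echo "sequester test" | mail -s "Sequester test" customer@example.net

The message should arrive in the shared "myfolder" folder rather than at the original address.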

by whitemice at May 06, 2018 08:11 PM

April 22, 2018

Whitemice Consulting

LDAP extensibleMatch

One of the beauties of LDAP is how simple it makes searching for the user or application. The various attribute types hint at how to intelligently perform searches, such as case sensitivity with strings, or whether dashes should be treated as relevant characters in the case of phone numbers, etc... However, there are circumstances when you need to override this intelligence and make your search more or less strict - for example, the case sensitivity of a string. That is the purpose of the extensibleMatch.

Look at this bit of schema:

attributetype ( 2.5.4.41 NAME 'name'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.5.4.4 NAME ( 'sn' 'surname' )
DESC 'RFC2256: last (family) name(s) for which the entity is known by'
SUP name )

The caseIgnoreMatch means that searches on attribute "name", or its descendant "sn" (used in the objectclass inetOrgPerson), are performed in a case insensitive manner. So...

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam sn=williams dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
SASL SSF: 128
SASL installing layers
# Adam Williams, People, Entities, SAM, whitemice.org
dn: cn=Adam Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org
# Michelle Williams, People, Entities, SAM, whitemice.org
dn: cn=Michelle Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org

... this search returns two objects where the sn value is "Williams" even though the search string was "williams".

If for some reason we want to match just the string "Williams", and not the string "williams" we can use the extensibleMatch syntax.

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam "(sn:caseExactMatch:=williams)" dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
search: 3
result: 0 Success
estate1:~ #

No objects found as both objects have "williams" with an initial capital letter.

Using extensibleMatch I was able to match the value of "sn" with my own preference regarding case sensitivity. The syntax for an extensibleMatch is "({attributename}:{matchingrule}:{filterspec})". This can be used inside a normal LDAP filter along with 'normal' matching expressions.

For more information on extensibleMatch see RFC2252 and your DSA's documentation [FYI: Active Directory is a DSA (Directory Service Agent), as is OpenLDAP or any other LDAP server].

by whitemice at April 22, 2018 03:14 PM

Android, SD cards, and exfat

I needed to prepare some SD cards for deployment to Android phones. After formatting the first SD card in a phone I moved it to my laptop and was met with the "Error mounting... unknown filesystem type exfat" error. That was somewhat startling as GVFS gracefully handles almost anything you throw at it. Following this I dropped down to the CLI to inspect how the SD card was formatted.

awilliam@beast01:~> sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 62.5 GiB, 67109912576 bytes, 131074048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *     2048 131074047 131072000 62.5G  7 HPFS/NTFS/exFAT

Seeing the file-system type I guessed that I was missing support for the hack that is exFAT [exFAT is FAT tweaked for use on large SD cards]. A zypper search exfat found two uninstalled packages; GVFS is principally an encapsulation of fuse that adds GNOME awesome into the experience - so the existence of a package named "fuse-exfat" looked promising.

I installed the two related packages:

awilliam@beast01:~> sudo zypper in exfat-utils fuse-exfat
(1/2) Installing: exfat-utils-1.2.7-5.2.x86_64 ........................[done]
(2/2) Installing: fuse-exfat-1.2.7-6.2.x86_64 ........................[done]
Additional rpm output:
Added 'exfat' to the file /etc/filesystems
Added 'exfat_fuse' to the file /etc/filesystems

I removed the SD card from my laptop, reinserted it, and it mounted. No restart of anything required. GVFS rules! At this point I could move forward with rsync'ing the gigabytes of documents onto the SD card.

It is also possible to initially format the card on the openSUSE laptop. Partition the card, creating a partition of type "7", and then use mkfs.exfat to format the partition. Be careful to give each card a unique ID using the -n option.
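
A minimal sketch of that partitioning step with fdisk (interactive commands; the device name is an example):

sudo fdisk /dev/mmcblk0
# n - create a new primary partition spanning the card
# t - change the partition type
# 7 - HPFS/NTFS/exFAT
# w - write the partition table and exit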

awilliam@beast01:~> sudo mkfs.exfat  -n 430E-2980 /dev/mmcblk0p1
mkexfatfs 1.2.7
Creating... done.
Flushing... done.
File system created successfully.

The mkfs.exfat command is provided by the exfat-utils package; a filesystem-utils package exists for most (all?) supported filesystems. These -utils packages provide the various commands to create, check, repair, or tune the eponymous filesystem type.

by whitemice at April 22, 2018 02:34 PM

April 03, 2018

Whitemice Consulting

VERR_PDM_DEVHLPR3_VERSION_MISMATCH

After downloading a VirtualBox-ready ISO of OpenVAS, the newly created virtual machine to host the instance failed to start with a VERR_PDM_DEVHLPR3_VERSION_MISMATCH error. The quick-and-dirty solution was to set the instance to use USB 1.1. This setting is changed under Machine -> Settings -> USB -> Select USB 1.1 OHCI Controller. After that change the instance now boots and runs the installer.

virtualbox-qt-5.1.34-47.1.x86_64
virtualbox-5.1.34-47.1.x86_64
virtualbox-host-kmp-default-5.1.34_k4.4.120_45-47.1.x86_64
kernel-default-4.4.120-45.1.x86_64
openSUSE 42.3 (x86_64)

by whitemice at April 03, 2018 12:21 PM

March 11, 2018

Whitemice Consulting

AWESOME: from-to Change Log viewer for PostgreSQL

Upgrading a database is always a tedious process - a responsible administrator will have to read through the Changelog for every subsequent version from the version ze is upgrading from to the one ze is upgrading to.

Then I found this! This is a Changelog viewer which allows you to select a from and a to version and shows you all the changelogs in between, on one page. You still have to read it, of course, but this is a great time saver.

by whitemice at March 11, 2018 01:15 AM

January 17, 2018

Whitemice Consulting

Discovering Informix Version Via SQL

It is possible using the dbinfo function to retrieve the engine's version information via an SQL command:

select dbinfo('version','full') from sysmaster:sysdual

which will return a value like:

IBM Informix Dynamic Server Version 12.10.FC6WE

by whitemice at January 17, 2018 08:56 PM

October 09, 2017

Whitemice Consulting

Failure to apply LDAP pages results control.

On a particular instance of OpenGroupware Coils the switch from an OpenLDAP server to an Active Directory service - which should be nearly seamless - resulted in "Failure to apply LDAP pages results control.". Interesting, as Active Directory certainly supports paged results - the 1.2.840.113556.1.4.319 control.

But there is a caveat! Of course.

Active Directory does not support the combination of the paged results control and referrals in some situations. So to reliably get the paged control enabled, it is also necessary to disable referrals.

...
dsa = ldap.initialize(config.get('url'))
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
....

Disabling referrals is likely what you want anyway, unless you are going to implement referral following. Additionally, in the case of Active Directory the referrals rarely reference data which an application would be interested in.

The details of Active Directory and paged results + referrals can be found here.

by whitemice at October 09, 2017 03:03 PM

August 31, 2017

Whitemice Consulting

opensuse 42.3

Finally got around to updating my work-a-day laptop to openSUSE 42.3. As usual I did an in-place distribution update via zypper. This involves replacing the previous version repositories with the current version repositories - and then performing a dup. And as usual the process was quick and flawless. After a reboot everything just-works and I go back to doing useful things. This makes for an uninteresting BLOG post, which is as it should be.

zypper lr --url
zypper rr http-download.opensuse.org-f7da6bb3
zypper rr packman
zypper rr repo-non-oss
zypper rr repo-oss
zypper rr repo-update-non-oss
zypper rr repo-update-oss
zypper rr server:mail
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/non-oss/ repo-non-oss
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ repo-oss
zypper ar http://download.opensuse.org/repositories/server:/mail/openSUSE_Leap_42.3/ server:mail
zypper ar http://download.opensuse.org/update/leap/42.3/non-oss/ repo-update-non-oss
zypper ar http://download.opensuse.org/update/leap/42.3/oss/ repo-update-oss
zypper ar http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.3 packman
zypper lr --url  # double check
zypper ref  # refresh
zypper dup --download-in-advance  # distribution update
zypper up  # update, just a double check
reboot

Done.

by whitemice at August 31, 2017 12:49 PM

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". However, when using a Microsoft DC searching for an object by its SID is not as straight-forward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see the guid2hex(text_sid) below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://pdc.example.com'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN, ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ]
    )
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        """filter out objects lacking a DN - they are LDAP referrals"""
        print('DN: {0}'.format(obj.get_dn(), ))

    pdc.unbind()

by whitemice at June 06, 2017 12:11 AM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit adam@example.com
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials

Huh.

[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

In some cases the case of the realm name matters.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'dn'
Modified 0 records with 0 failures

Eh? @&^$*&;@&^@! It does start with a dn: attribute - it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with the "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file. Add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but in the future a yum update may break your system.

Install PostgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note that this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package destroying your database if you do so accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O http://epel.mirror.freedomvoice.com/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG package for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages you will see dependency errors.

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: libjson.so.0()(64bit) for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libcfitsio.so.0()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libspatialite.so.2()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
...

Initializing PostGIS

The template database "template_postgis" is expected to exist by many PostGIS applications; but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are attempting to use other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the scheme - to do so run the following command immediately after the "createdb" command. If you see the error stating the language is already enabled you are good to go, otherwise you should see a message stating the language was enabled. If creating the language fails for any other reason than already being enabled you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"

Celebrate

PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.
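
As a quick sanity check, you can create a database from the new template and ask PostGIS for its version (the database name is just an example):

createdb -T template_postgis gis_test
psql -d gis_test -c "SELECT postgis_full_version();"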

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious, if they are shaping their own raw packets onto the wire. But, no, the explanation is the much less exciting, and typical, lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 15,91,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on one interface on the network and disabled in this interface. CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol, that if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in the interface configuration - those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.
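
For reference, a sketch of the relevant IOS configuration (the interface name is an example):

interface GigabitEthernet0/1
 no cdp enable
!
! or, to disable CDP device-wide:
no cdp run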

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format into the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencasting is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert WindowsServer1-compressed.img -O raw WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however, this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Sure enough the file is the same size as when we started (size: 14,591,983,616). Upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use that for testing. Set the original as immutable [chattr +i] to prevent it from being used by accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File                               Size
WindowsServer1-compressed.img.bz2  5,102,043,940
WindowsServer1.SCRATCH.vdi         14,591,983,616
WindowsServer1.vdi                 14,591,983,616

Step#5

Generate a new UUID for the scratch image. This is necessary anytime a disk image is duplicated. Otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem - and it fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is that "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to some much lower value - like 5% or 10% (for what reason I don't know). This value can be set at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.
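A quick sketch of both adjustments, assuming the filesystem from the xfs_info output above (/dev/sdb1 mounted at /mnt):

# set the inode percentage limit when creating the filesystem
mkfs.xfs -i maxpct=25 /dev/sdb1

# or raise it later on the mounted filesystem, then confirm
xfs_growfs -m 25 /mnt
xfs_info /mnt | grep imaxpct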

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugged in, these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we don't have a drive letter the "::" is used to defer to the actual device specified using the "-i" directive. The "-s" directive means show, otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [default behavior is to set, not show].

by whitemice at January 24, 2017 07:51 PM

Deduplicating with group_by, func.min, and having

You have a text file with four million records and you want to load this data into a table in an SQLite database. But some of these records are duplicates (based on certain fields) and the file is not ordered. Due to the size of the data, loading the entire file into memory doesn't work very well. And due to the number of records, doing a check-at-insert while loading the data is prohibitively slow. But what does work pretty well is to just load all the data and then deduplicate it. Having an auto-increment record id is what makes this possible.

class VendorCross(scratch_base):
    __tablename__ = 'sku'
    id      = Column(Integer, primary_key=True, autoincrement=True)
...

Once all the data gets loaded into the table, the deduplication is straightforward using minimum and group by.

from sqlalchemy import and_, func

query = scratch.query(
    func.min( VendorCross.id ),
    VendorCross.sku,
    VendorCross.oem,
    VendorCross.part ).filter(VendorCross.source == source).group_by(
        VendorCross.sku,
        VendorCross.oem,
        VendorCross.part ).having(
            func.count(VendorCross.id) > 1 )
counter = 0
for (id, sku, oem, part, ) in query.all( ):
    counter += 1
    scratch.query(VendorCross).filter(
        and_(
            VendorCross.source == source, 
            VendorCross.sku == sku,
            VendorCross.oem == oem,
            VendorCross.part == part,
            VendorCross.id != id ) ).delete( )
    if not (counter % 1000):
        # Commit every 1,000 records, SQLite does not like big transactions
        scratch.commit()
scratch.commit()

This incantation removes all the records from each group except for the one with the lowest id. The trick for good performance is to batch many deletes into each transaction - only commit every so many [in this case 1,000] groups processed; just also remember to commit at the end to catch the deletes from the last iteration.
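For reference, the same deduplication can be expressed as one SQL statement - a sketch assuming table and column names matching the model above, and a table small enough to tolerate a single large transaction (the batched approach above exists precisely to avoid that):

DELETE FROM sku
 WHERE source = :source
   AND id NOT IN (SELECT MIN(id)
                    FROM sku
                   WHERE source = :source
                   GROUP BY sku, oem, part);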

by whitemice at January 24, 2017 07:45 PM

AIX Printer Migration

There are few things in IT more utterly and completely baffling than the AIX printer subsystem.  While powerful, it accomplishes its task with more arcane syntax and scattered settings files than anything else I have encountered. So the day inevitably comes when you face the daunting task of copying/recreating several hundred print queues from some tired old RS/6000 we'll refer to as OLDHOST to a shiny new pSeries known here as NEWHOST.  [Did you know the bar Stellas in downtown Grand Rapids has more than 200 varieties of whiskey on their menu?  If you've dealt with AIX's printing subsystem you will understand the relevance.] To add to this Sisyphean task the configuration of those printers has been tweaked, twiddled and massaged individually for years - so that rules out the wonderful possibility of saying to some IT minion "make all these printers, set all the settings exactly the same" [thus convincing the poor sod to seek alternate employment, possibly as a bar-tender at the aforementioned Stellas].

Aside: Does IBM really truly not provide a migration technique?  No. Seriously, yeah. 

But I now present to you the following incantation [to use at your own risk]:

scp root@OLDHOST:/etc/qconfig /etc/qconfig
stopsrc -cg spooler
startsrc -g spooler
rsync --recursive --owner --group --perms \
  root@OLDHOST:/var/spool/lpd/pio/@local/custom/ \
  /var/spool/lpd/pio/@local/custom/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/dev/ \
  /var/spool/lpd/pio/@local/dev/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/ddi/ \
  /var/spool/lpd/pio/@local/ddi/
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*
enq -d
cd  /var/spool/lpd/pio/@local/custom
for FILE in `ls`
 do
   /usr/lib/lpd/pio/etc/piodigest $FILE 
 done
chown root:printq /var/spool/lpd/pio/@local/custom/*
chown root:printq /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*

Execute this sequence on NEWHOST and the print queues and their configurations will be "migrated". 

NOTE#1: This depends on all those print queues being network attached printers.  If the system has direct attached printers that correspond to devices such as concentrators, lion boxes, serial ports, SCSI buses,.... then please do not do this, you are on your own.  Do not call me, we never talked about this.

NOTE#2: This will work once.  If you've then made changes to printer configuration or added/removed printers do not do it again.  If you want to do it again first delete ALL the printers on NEWHOST.  Then reboot, just to be safe.  At least stop and start the spooler service after deleting ALL the printer queues.

NOTE#3: I do not endorse, warranty, or stand behind this method of printer queue migration.  It is probably a bad idea.  But the entire printing subsystem in AIX is a bad idea, sooo.... If this does not work do not call me; we never talked about this.

by whitemice at January 24, 2017 11:46 AM

The source files could not be found.

I have several Windows 2012 VMs in a cloud environment and discovered I am unable to install certain roles / features. Attempting to do so fails with a "The source files could not be found." error. This somewhat misleading message indicates Windows is looking for the OS install media. Most of the solutions on the Interwebz for working around this error describe how to point the server at an alternate path to the install media ... problem being that these VMs were created from a pre-activated OVF image and there is no install media available from the cloud's library.

Lacking install media the best solution is to set the server to skip the install media and grab the files from Windows Update.

  1. Run "gpedit.msc"
  2. "Local Computer Policy"
  3. "Administrative Templates"
  4. "System"
  5. Enable "Specify settings for optional component installation and component repair"
  6. Check the "Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS)"

Due to technical limitations WSUS cannot be utilized for this purpose, which is sad given that there is a WSUS server sitting in the same cloud. :(
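Once the policy is in place it can be applied and the installation retried from an elevated PowerShell prompt - a sketch; NET-Framework-Core is only an example feature name, substitute the role or feature that originally failed:

gpupdate /force
Install-WindowsFeature -Name NET-Framework-Core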

by whitemice at January 24, 2017 11:31 AM

October 03, 2016

Whitemice Consulting

Playing With Drive Images

I purchased a copy of Windows 10 on a USB thumbdrive. I chose physical media so as to have (a) a backup and (b) no need to bother with downloading a massive image. Primarily this copy of Windows will be used in VirtualBox for testing, using PowerShell, and other tedious system administrivia. First thing when it arrived, I used dd to make a full image of the thumbdrive so I could tuck it away in a safe place.

dd if=/dev/sde of=Windows10.Thumbdrive.20160918.dd bs=512

But now the trick is to take that raw image and convert it to a VMDK so that it can be attached to a virtual machine. The VBoxManage command provides this functionality:

VBoxManage internalcommands createrawvmdk -filename Windows10.vmdk -rawdisk Windows10.Thumbdrive.20160918.dd

Now I have a VMDK file. If you do this you will notice the VMDK file is small - it is essentially a pointer to the disk image; the purpose of the VMDK is to provide the meta-data necessary to make the hypervisor (in this case VirtualBox) happy. Upshot of that is that you cannot delete the dd image as it is part of your VMDK.

Note that this dd file is a complete disk image; including the partition table:

awilliam@beast01:/vms/ISOs> /usr/sbin/fdisk -l Windows10.Thumbdrive.20160918.dd
Disk Windows10.Thumbdrive.20160918.dd: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device                            Boot Start      End  Sectors  Size Id Type
Windows10.Thumbdrive.20160918.dd 1 *     2048 30277631 30275584 14.4G  c W95 FAT3

So if I wanted to mount that partition on the host operating system I can do that by calculating the offset and mounting through loopback. The offset to the start of the partition within the drive image is the start sector multiplied by the sector size: 2,048 * 512 = 1,048,576. The mount command provides support for offset mounting:

beast01:/vms/ISOs $ sudo mount -o loop,ro,offset=1048576 Windows10.Thumbdrive.20160918.dd /mnt
beast01:/vms/ISOs # ls /mnt
83561421-11f5-4e09-8a59-933aks71366.ini  boot     bootmgr.efi  setup.exe                  x64
autorun.inf                              bootmgr  efi          System Volume Information  x86
beast01:/vms/ISOs $ sudo umount /mnt

If all I wanted was the partition, and not the drive, the same offset logic could be used to lift the partition out of the image into a distinct file:

dd if=Windows10.Thumbdrive.20160918.dd of=Windows10.image bs=512 skip=2048

The "Windows10.image" file could be mounted via loopback without bothering with an offset. It might however be more difficult to get a virtual host to boot from a FAT partition that does not have a partition table.

by whitemice at October 03, 2016 10:43 AM

September 15, 2016

Whitemice Consulting

Some Informix DATETIME/INTERVAL Tips

Determine the DATE of the first day of the current week.

(SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)

Informix always treats Sunday as day 0 of the week. The WEEKDAY function returns the number of the day of the week as a value of 0 - 6, so subtracting the weekday from the current day (TODAY) returns the DATE value of the Sunday of the current week.

Determining HOURS between two DATETIME values.

It is all about the INTERVAL data type and its rather odd syntax.

SELECT mpr.person_id, mpr.cn_name, 
  ((SUM(out_time - in_time))::INTERVAL HOUR(9) TO HOUR) AS hours
FROM service_time_card stc
  INNER JOIN morrisonpersonr mpr ON (mpr.person_id = stc.technician_id)
WHERE mpr.person_id IN (SELECT person_id FROM branch_membership WHERE branch_code = 'TSC')
  AND in_time > (SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)  
GROUP BY 1,2

The "(9)" part of the expression INTERVAL HOUR(9) TO HOUR is key - it allocates lots of room for hours, otherwise any value of more than a trivial number of hours will cause the clearly correct by not helpful SQL -1265 error: "Overflow occurred on a datetime or interval operation". As, in my case I had a highest value of 6,483 hours I needed at least HOUR(4) TO HOUR to avoid the overflow error. HOUR(9) is the maximum - an expression of HOUR(10) results in an unhelpful generic SQL -201: "A syntax error has occurred.". On the other hand HOURS(9) is 114,155 years and some change, so... it is doubtful that is going to be a problem in most applications.

by whitemice at September 15, 2016 07:46 PM

August 28, 2015

Ben Rousch's Cluster of Bleep

Kivy – Interactive Applications and Games in Python, 2nd Edition Review

I was recently asked by the author to review the second edition of “Kivy – Interactive Applications in Python” from Packt Publishing. I had difficulty recommending the first edition mostly due to the atrocious editing – or lack thereof – that it had suffered. It really reflected badly on Packt, and since it was the only Kivy book available, I did not want that same inattention to quality to reflect on Kivy. Packt gave me a free ebook copy of this book in exchange for agreeing to do this review.

At any rate, the second edition is much improved over the first. Although a couple of glaring issues remain, it looks like it has been visited by at least one native English speaking editor. The Kivy content is good, and I can now recommend it for folks who know Python and want to get started with Kivy. The following is the review I posted to Amazon:

This second edition of “Kivy – Interactive Applications and Games in Python” is much improved from the first edition. The atrocious grammar throughout the first edition book has mostly been fixed, although it’s still worse than what I expect from a professionally edited book. The new chapters showcase current Kivy features while reiterating how to build a basic Kivy app, and the book covers an impressive amount of material in its nearly 185 pages. I think this is due largely to the efficiency and power of coding in Python and Kivy, but also to the carefully-chosen projects the author selected for his readers to create. Despite several indentation issues in the example code and the many grammar issues typical of Packt’s books, I can now recommend this book for intermediate to experienced Python programmers who are looking to get started with Kivy.

Chapter one is a good, quick introduction to a minimal Kivy app, layouts, widgets, and their properties.

Chapter two is an excellent introduction and exploration of basic canvas features and usage. This is often a difficult concept for beginners to understand, and this chapter handles it well.

Chapter three covers events and binding of events, but is much denser and more difficult to grok than chapter two. It will likely require multiple reads of the chapter to get a good understanding of the topic, but if you’re persistent, everything you need is there.

Chapter four contains a hodge-podge of Kivy user interface features. Screens and scatters are covered well, but gestures still feel like magic. I have yet to find a good in-depth explanation of gestures in Kivy, so this does not come as a surprise. Behaviors is a new feature in Kivy and a new section in this second edition of the book. Changing default styles is also covered in this chapter. The author does not talk about providing a custom atlas for styling, but presents an alternative method for theming involving Factories.

In chapter six the author does a good job of covering animations, and introduces sounds, the clock, and atlases. He brings these pieces together to build a version of Space Invaders, in about 500 lines of Python and KV. It ends up a bit code-dense, but the result is a fun game and a concise code base to play around with.

In chapter seven the author builds a TED video player including subtitles and an Android actionbar. There is perhaps too much attention paid to the VideoPlayer widget, but the resulting application is a useful base for creating other video applications.

by brousch at August 28, 2015 01:16 AM

August 06, 2015

Whitemice Consulting

Cut-N-Paste Options Greyed Out In Excel

Yesterday I encountered a user who could not cut-and-paste in Microsoft Excel. The options to Cut, Copy, and Paste were disabled - aka 'greyed out' - in the menus. Seems like an odd condition.

The conclusion is that Excel's configuration had become corrupted. Resolution involves exiting Excel, deleting Excel's customized configuration, and then restarting the application. Lacking a set of configuration files the application regenerates a new default configuration and cut-and-paste functionality is restored.

Excel stores its per-user configuration in XLB files in the %%PROFILEDIR%%\AppData\Roaming\Microsoft\Excel folder. Navigate to this folder and delete all the XLB files - with all Microsoft Office applications shutdown.
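With every Office application closed, the deletion can be done from a Command Prompt - a sketch assuming the default profile location:

del "%APPDATA%\Microsoft\Excel\*.xlb"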

After resolving this issue I found a more user approachable solution - no diddling in the file-system - but with Excel now working I was not able to verify it [and I do not know how to deliberately corrupt Excel's configuration].

  1. Right click on a sheet tab and select "View Code"
  2. From the "View" menu select "Immediate Window" if it's not already displayed.
  3. Paste the following into the "Immediate Window" and press enter: CommandBars("Cell").Reset

Of course, deleting the per-user configuration in Excel will delete the user's customizations.

by whitemice at August 06, 2015 11:06 AM

May 19, 2015

Whitemice Consulting

Which Application?

Which application manages this type of file? How can I, by default, open files of type X with application Y? These questions float around in GNOME forums and mailing lists on a regular basis.

The answer is: gvfs-mime.

To determine what application by default opens a file of a given type, as well as what other applications are installed which register support for that file-type, use the --query option, like:

awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': org.gnome.gedit.desktop
Registered applications:
    geany.desktop
    org.gnome.gedit.desktop
    calc.desktop
    ghex.desktop
    wine-extension-txt.desktop
    monodevelop.desktop
    writer.desktop
Recommended applications:
    geany.desktop
    org.gnome.gedit.desktop

Applications register support for document types using the XDG ".desktop" standard, and the default application is stored per-user in the file $XDG_DATA_HOME/applications/mimeapps.list. In most cases $XDG_DATA_HOME is $HOME/.local/share [this is the value, according to the spec, when the XDG_DATA_HOME environment variable is not set].
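For illustration, the relevant entries in mimeapps.list look roughly like this (a sketch; which sections are present varies by desktop and distribution):

[Default Applications]
text/x-python=geany.desktop

[Added Associations]
text/x-python=geany.desktop;org.gnome.gedit.desktop;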

Not only can gvfs-mime query the association database, it can also be used by the user to set their default handler - simpler than attempting to discover the right object to right-click.

awilliam@GNOMERULEZ:~> gvfs-mime --set text/x-python geany.desktop
Set geany.desktop as the default for text/x-python
awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': geany.desktop
Registered applications:
    geany.desktop
    org.gnome.gedit.desktop
    calc.desktop
    ghex.desktop
    wine-extension-txt.desktop
    monodevelop.desktop
    writer.desktop
Recommended applications:
    geany.desktop
    org.gnome.gedit.desktop

Python files are now, by default, handled by Geany.

by whitemice at May 19, 2015 11:12 AM

May 07, 2015

Ben Rousch's Cluster of Bleep

My Farewell to GRMakers

Many of you have seen the recent board resignations and are wondering what the heck is going on over at GR Makers. We each have our own experiences, and I will set out mine here. It is a long story, but I think you deserve to hear it, so you can draw your own conclusions. I encourage you to reply to me personally (brousch@gmail.com) or via the comments on this blog post if you’d like to provide clarifications or additions to what I have to say.

I joined GR Makers not so much to make things, but to have an excuse to hang out with the most interesting group of people I’d ever met. That group started as half a dozen open source enthusiasts gathering at weekly Linux user group meetings at coffee shops, and grew to a much larger, more diverse, and eclectic gathering of developers, inventors, designers, electronics hackers, and much more thanks to Casey DuBois’ welcoming personality, non-judgemental inclusiveness, and networking prowess. A part of what brought the group together was an unstructured openness that made everyone feel like they had a say in what we were doing. When the group grew too large to continue meeting in Casey’s garage, several regulars looked around for ways of keeping the group together and growing in other locations.

Mutually Human Software offered a physical space and monetary support to keep the group together, but we had to change how the group was run. Since MHS was providing so many resources, they would own the group. There was a large meeting to decide if this was the way we wanted to go. The opinions were divided, but in the end we had to take this deal or disband the group because we’d have nowhere to meet. Casey took a job with MHS, and over the course of two years we slowly became a real makerspace. Casey continued to make connections between GR Makers, companies who donated equipment and supplies, and the community. The Socials became bigger, and so did the space.

As we grew, communication became a problem. If you didn’t attend the weekly socials and talk to Casey in person, you had no idea what was going on. Even those of us who were regularly there had no idea about how the makerspace was being run. An opaque layer existed between the community and those who actually owned and made decisions affecting the group. Even basic questions from paying members would go unanswered when submitted to the official communication channel. Were we making money? How many members were there? Who are the owners? Is there a board, and if so, who is on it? Who is actually making decisions and how are those decisions being reached? Are our suggestions being seen and considered by these people?

Despite these issues, several interesting initiatives and projects came out of the community and makerspace: the Exposed ArtPrize Project, GR Young Makers, The Hot Spot, and most recently Jim Winter-Troutwine’s impressive sea kayak. I enjoyed the community, and wanted to see it continue to thrive.

I thought the communication problem was one of scale: there was a large community and only a few people running things. I assumed those in charge were simply overwhelmed by the work required to keep everyone informed. In an attempt to fix this problem, I volunteered to write a weekly newsletter which I hoped would act as a conduit for the leadership to inform those who were interested. I asked for a single piece of information when I started the newsletter: a list of board members and what their roles were. I did not receive this information, but went ahead anyways, thinking that it would be sorted out soon. I gathered interesting information by visiting the space and talking to the community at the Socials each week and put it into a digestible format, but still that simple piece of information was refused me. Each newsletter was approved by Samuel Bowles or Mark Van Holstyn before it was sent, sometimes resulting in a delay of days and occasionally resulting in articles being edited by them when they did not agree with what I had written.

Shortly after the first few editions of the newsletter, Casey and Mutually Human parted ways. My conversations with the people who formed that initial core of what became GR Makers revealed a much more systemic problem in the leadership than I had realized. There was indeed a board, made up of those people I talked to. They passed on concerns and advice from themselves and the members to the owners, but that’s all they were allowed to do. The board had no real power or influence, and it turns out that it had never had any. The decisions were being made by two people at MHS who held the purse strings, and even this advisory board was often kept in the dark about what was being decided.

This cauldron of problems finally boiled over and was made public at a town hall meeting on March 25, 2015. Over the course of a week, the advisory board and the owners held a series of private meetings and talked for hours to try to keep GR Makers together. Concessions and public apologies were made on both sides and an agreement was reached which seemed to satisfy nearly everyone. In short, it was promised that the leadership would give the board more powers and would become more transparent about finances, membership, and decision making. This link leads to my summary of that town hall meeting, and a nearly identical version of those notes went out in an approved edition of the newsletter.

The community was relieved that the makerspace we had worked so hard to create was not going to collapse, and I assumed that the board was being empowered. Bob Orchard was added to the advisory board and kept and published minutes from the board meetings – something which had not been done previously. These minutes always mentioned requests for the changes that had been agreed upon at the Town Hall, but action on those requests was always delayed. At the board meeting on April 29, the requests were finally officially denied. The minutes from that board meeting can be found here. Most of the board members – including all of the founders of that initial group in Casey’s garage – resigned as a result of this meeting.

It is up to each of us to decide if GR Makers as it exists today meets our desires and needs. There are still good people at GR Makers, but that initial group of interesting people has left. Without them I find very little reason to continue contributing. The ownership structure of GR Makers was an educational and enlightening experiment, but it is not what I want to be a part of. I think the openness and transparency that formed the backbone of that group which became GR Makers is gone, and I don’t think it is coming back. So it is with a heavy heart that I am resigning my membership.

But do not despair. That initial group of friends – that sociable collection of connectors, hackers, inventors, and makers – and a few new faces we’ve picked up along the way, have been talking together. We want to start over with a focus on the community and ideals that existed in the gatherings at Casey’s garage. It may be a while before we have a stable space to meet and tools for people to use, but I hope you’ll join us when we’re ready to try again. If you’d like to be kept up to date on this group, please fill out this short form.

by brousch at May 07, 2015 11:16 PM