Planet GRPUG

January 25, 2023

Whitemice Consulting

Converting MBOX to ZIP

Task: Converting an MBOX format export of a mailbox into a ZIP file containing each message as a file named after the message-id of the email message. Every e-mail client worth a pinch of salt can export messages, or a mailbox, to an MBOX file.

import mailbox
import zipfile
from email.generator import Generator
from tempfile import NamedTemporaryFile

def get_scratch_file():
    tmp = NamedTemporaryFile(
        mode='w+',  # text mode: email.generator.Generator writes str, not bytes
        suffix='.data',
    )
    return tmp

if __name__ == '__main__':

    mbox = mailbox.mbox('mailbox.mbox')  # The MBox file to read
    wfile = open('mailbox.zip', 'wb')  # The ZIP file to create

    zfile = zipfile.ZipFile(wfile, 'a', compression=zipfile.ZIP_DEFLATED, )

    messages = dict()
    counter = 0
    for message in mbox:
        counter += 1
        message_id = message['Message-ID'].strip()[1:-1]  # remove the beginning "<"  & ">" from the Message-ID
        filename = '{0}-{1}.mbox'.format(counter, message_id, ).replace('/', '')  # remove any filesystem separators from the Message-ID
        print(filename)
        sfile = get_scratch_file()
        g = Generator(sfile, mangle_from_=False, maxheaderlen=255, )
        g.flatten(message)
        sfile.flush()
        sfile.seek(0)
        zfile.write(sfile.name, arcname=filename, )
        sfile.close()

    zfile.close()
    wfile.close()
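
To sanity-check the result, the archive can be listed with a few lines of Python; this is just a quick sketch, assuming the script above produced mailbox.zip in the current directory:

import zipfile

with zipfile.ZipFile('mailbox.zip') as zf:
    names = zf.namelist()
    print('{0} messages archived'.format(len(names)))
    for name in names[:5]:  # peek at the first few entries
        print(name)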

by whitemice at January 25, 2023 04:03 PM

March 10, 2022

Ben Rousch's Cluster of Bleep

Gootloader infection cleaned up

Dear blog owner and visitors,

This blog had been infected to serve up Gootloader malware to Google search victims, via a common tactic known as SEO (Search Engine Optimization) poisoning. Your blog was serving up 381 malicious pages. Your blog served up malware to 493 visitors.

I tried my best to clean up the infection, but I would also do the following:

  • Upgrade WordPress to the latest version (one way the attackers might have gained access to your server)
  • Upgrade all WordPress themes to the latest versions (another way the attackers might have gained access to your server)
  • Upgrade all WordPress plugins (another way the attackers might have gained access to your server), and remove any unnecessary plugins.
  • Verify all users are valid (in case the attackers left a backup account, to get back in)
  • Change all passwords (for WordPress accounts, FTP, SSH, database, etc.) and keys. This is probably how the attackers got in, as they are known to brute force weak passwords
  • Run antivirus scans on your server
  ‱ Block these IPs (5.8.18.7 and 89.238.176.151), either in your firewall, .htaccess file, or in your /etc/hosts file, as these are the attackers' command and control servers, which send malicious commands for your blog to execute
  ‱ Check cronjobs (both server and WordPress), aka scheduled tasks. This is a common method that an attacker will use to get back in. If you are not sure what this is, Google it
  ‱ Consider wiping the server completely, as you do not know how deep the infection is. If you decide not to, I recommend installing some security plugins for WordPress, to try and scan for any remaining malicious files. Integrity Checker, WordPress Core Integrity Checker, Sucuri Security, and Wordfence Security all do some level of detection, but are not 100% guaranteed
  ‱ Go through the process for Google to recrawl your site, to remove the malicious links (to see what malicious pages there were, go to Google and search site:your_site.com agreement)
  • Check subdomains, to see if they were infected as well
  • Check file permissions

Gootloader (previously Gootkit) malware has been around since 2014, and is used to initially infect a system, and then sell that access off to other attackers, who then usually deploy additional malware, including ransomware and banking trojans. Cleaning up your blog will make a dent in how they infect victims. PLEASE try to keep it up-to-date and secure, so this does not happen again.

Sincerely,

The Internet Janitor

Below are some links to research/further explanation on Gootloader:

https://news.sophos.com/en-us/2021/03/01/gootloader-expands-its-payload-delivery-options/

https://news.sophos.com/en-us/2021/08/12/gootloaders-mothership-controls-malicious-content/

https://www.richinfante.com/2020/04/12/reverse-engineering-dolly-wordpress-malware

https://blog.sucuri.net/2018/12/clever-seo-spam-injection.html


by brousch at March 10, 2022 02:26 PM

October 05, 2021

Whitemice Consulting

0x0000011b

KB5005573 broke our M$-Windows due to Windows' broken printing subsystem and Microsoft's refusal to migrate to Open solutions such as IPP & cupsd. Suddenly M$-Windows clients were failing to connect to printers with an extremely helpful and illuminating error code of 0x0000011b.

This relates to a fix Microsoft released in yet another attempt to close cavernous security holes in SPOOLSS (the Windows printing subsystem - since they don't use cupsd); this week the security issue in question is the aptly named "PrintNightmare".

Setting a registry key on our M$-Windows print server allowed clients to print again. Importantly it does this by disabling the fix related to "PrintNightmare"; on the other hand it allows Windows printing to work.

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Print]
"RpcAuthnLevelPrivacyEnabled"=dword:00000000

See also: How to fix the Windows 0x0000011b network printing error

by whitemice at October 05, 2021 12:43 PM

April 30, 2021

Whitemice Consulting

Upgrading A Cisco AP To Autonomous

Upgrading a Cisco AP, in this case a LAP1142N, from "lightweight" [not very useful] mode to "Autonomous" [useful] mode. This assumes the access point has been reset to factory defaults. For this example the AP is being upgraded to c1140-k9w7-mx.153-3.JBB.tar, which is available on a tftp service @ 172.31.7.125.

1.) Connect the AP to the ethernet network and connect the workstation/laptop to the console port.
2.) Start minicom and get a connection. A port speed of 9600 8N1 should work.
3.) When the AP begins to load its default image press Ctrl-A, F to send the break. This should interrupt the boot process and drop into ROMMON; the ap: prompt means you won.

flashfs[0]: flashfs fsck took 17 seconds.
Reading cookie from system serial eeprom...Done
Base Ethernet MAC address: e8:b7:48:ac:23:48
Ethernet speed is 1000 Mb - FULL duplex
Loading "flash:/c1140-rcvk9w8-mx/c1140-rcvk9w8-mx"...###################################
Error loading "flash:/c1140-rcvk9w8-mx/c1140-rcvk9w8-mx"

Interrupt within 5 seconds to abort boot process.
Boot process terminated.

The system is unable to boot automatically.  The BOOT
environment variable needs to be set to a bootable
image.

C1140 Boot Loader (C1140-BOOT-M) Version 12.4(23c)JA, RELEASE SOFTWARE (fc3)
Technical Support: http://www.cisco.com/techsupport
Compiled Tue 01-Jun-10 12:52 by prod_rel_team

ap:

4.) Configure the ethernet connection and bring the AP online.
ap: set IP_ADDR 192.168.37.144
ap: set NETMASK 255.255.255.0
ap: set DEFAULT_ROUTER 192.168.37.19
ap: tftp_init
ap: ether_init
ap: flash_init
Initializing Flash... ...The flash is already initialized.

5.) Untar the desired software version onto the AP's flash. This step may take a moment.
ap: tar -xtract tftp://172.31.7.125/c1140-k9w7-mx.153-3.JBB.tar flash:
extracting info (280 bytes)
c1140-k9w7-mx.153-3.JBB (directory) 0 (bytes)
extracting c1140-k9w7-mx.153-3.JBB/c1140-k9w7-mx.153-3.JBB (119277 bytes).........................
c1140-k9w7-mx.153-3.JBB/html (directory) 0 (bytes)
c1140-k9w7-mx.153-3.JBB/html/level (directory) 0 (bytes)
...
ap:

6.) Set the boot image.
ap: set BOOT flash://c1140-k9w7-mx.153-3.JBB/c1140-k9w7-xx.153-3.JBB

7.) Reboot!
ap: boot

The AP should reboot into the powerful new software. Once the reload is complete be sure to double check the version.

ap>show version
Cisco IOS Software, C1140 Software (C1140-K9W7-M), Version 15.3(3)JBB, RELEASE SOFTWARE (fc1)
Technical Support: http://www.cisco.com/techsupport
Copyright (c) 1986-2015 by Cisco Systems, Inc.
Compiled Mon 04-May-15 18:53 by prod_rel_team

ROM: Bootstrap program is C1140 boot loader
BOOTLDR: C1140 Boot Loader (C1140-BOOT-M) Version 12.4(23c)JA, RELEASE SOFTWARE (fc3)

ap uptime is 5 minutes
System returned to ROM by reload
System image file is "flash://c1140-k9w7-mx.153-3.JBB/c1140-k9w7-xx.153-3.JBB"

8.) Configure, drink đŸș, order 🌼s.

by whitemice at April 30, 2021 01:11 PM

December 11, 2020

Whitemice Consulting

Linting JSON On The Command Line

JSON is a strange format [I'm not a fan]. Opening a large JSON file in many text editors is unfruitful when the file is one long line - they will burn CPU trying to line wrap the data.

JSON, however, can easily be linted on the command line, producing a friendlier file.

cat onelongline.json | python -m json.tool > linted.json

And the file linted.json is readable and friendlier with text editors.
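
The same linting can be done without a shell pipeline at all; a minimal sketch using only the standard library (the file names are just examples):

import json

with open('onelongline.json') as src, open('linted.json', 'w') as dst:
    json.dump(json.load(src), dst, indent=4)  # same pretty-printing json.tool performs
    dst.write('\n')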


by whitemice at December 11, 2020 01:41 PM

December 10, 2020

Whitemice Consulting

Postfix IPv4 Only

I have a postfix SMTP relay buried deep in a network behind proxy servers; all the infrastructure [sadly] is IPv4 only. This works, yet one ends up with many log messages like:

connect to smtp.office365.com[2603:1036:304:2857::2]:587: Network is unreachable

The server attempts the IPv6 result from the DNS lookup first. So let's make postfix use IPv4 only.

postconf -e inet_protocols=ipv4

That's it! No more "unreachable" log messages.

BTW, the default value of inet_protocols is "all". Set it back to that value to re-enable IPv6.

by whitemice at December 10, 2020 07:58 PM

December 04, 2020

Whitemice Consulting

Virtual Box Start Error (VNC)

I went to start my Windows XP virtual machine after something like ~4 years, and it failed to start with a 0x80004005 error: "Could not find the VirtualBox Remote Desktop Extension library." Hmm, that's strange.

Turns out that the extensions loaded by the Windows XP VM use the library libvncserver, which was no longer installed on the host. Unfortunately the VirtualBox Extensions are not integrated into the distribution's package manager.


awilliam@bestia:~> sudo zypper install libvncclient0 libvncserver0
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 2 NEW packages are going to be installed:
  libvncclient0 libvncserver0

2 new packages to install.
Overall download size: 206.5 KiB. Already cached: 0 B. After the operation,
additional 480.8 KiB will be used.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving package libvncclient0-0.9.10-lp152.9.8.1.x86_64
                                     (1/2),  73.6 KiB (159.0 KiB unpacked)
Retrieving: libvncclient0-0.9.10-lp152.9.8.1.x86_64.rpm ................[done]
Retrieving package libvncserver0-0.9.10-lp152.9.8.1.x86_64
                                     (2/2), 132.8 KiB (321.8 KiB unpacked)
Retrieving: libvncserver0-0.9.10-lp152.9.8.1.x86_64.rpm ..............[done (2.2 MiB/s)]

Checking for file conflicts: .......................................[done]
(1/2) Installing: libvncclient0-0.9.10-lp152.9.8.1.x86_64 ..........[done]
(2/2) Installing: libvncserver0-0.9.10-lp152.9.8.1.x86_64 ..........[done]
awilliam@bestia:~>

The Windows XP VM now boots up normally! :) Also, it is phenomenal how fast Windows XP was/is compared to current Windows operating systems.

by whitemice at December 04, 2020 01:21 PM

September 09, 2020

Whitemice Consulting

Dropping An Element In An Iterative Parse

I was using lxml's etree to iteratively parse an XML document and wanted to drop a specific element from the stream...

        for event, element in etree.iterparse(self.rfile, events=("end",)):
            if (event == 'end') and (element.tag == 'row'):
                self.wfile.write(etree.tostring(element))
            elif (event == 'end') and (element.tag == name_of_element_to_drop):
                element.getparent().remove(element) # drop element

The secret sauce is: element.getparent().remove(element)

The document is a standard XML document, like:

<ResultSet>
   <row> 
       ... elements...
  </row>
  ...
</ResultSet>
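
For reference, a self-contained sketch of the same technique; the in-memory document and the name of the element being dropped are made up for illustration:

from io import BytesIO
from lxml import etree

DOC = b'<ResultSet><row><keep>1</keep><noise>x</noise></row></ResultSet>'

out = BytesIO()
for event, element in etree.iterparse(BytesIO(DOC), events=("end",)):
    if element.tag == 'row':
        out.write(etree.tostring(element))
    elif element.tag == 'noise':
        element.getparent().remove(element)  # drop element before its <row> is serialized

print(out.getvalue())  # b'<row><keep>1</keep></row>'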

by whitemice at September 09, 2020 06:06 PM

July 14, 2020

zigg.com (Matt Beherens' blog)

The drain of self-advocacy

It seems it’s International Non-binary People’s Day. Which is cool.

I had no idea it was today until I saw the posts. What can I say? I haven’t flipped through the gay agenda in a while.

The post that brought it to my attention contained one of the many articles that queer organizations publish on these sorts of days to help people learn how to be a better ally.

“Great!” I thought. “I can share this! And people can pass it around and learn some important things!”

But I hesitated.

I haven’t talked about this much outside a few passing comments, but self-advocacy is exhausting. I feel like I’m always saying “look at me!” I’m imagining all the people who don’t show engagement (in any way I can see, at least) are rolling their eyes at yet another self-centered Mattie post.

Which, this is one of those, isn’t it? Hah.

Eventually I settled on sharing it with a statement that the post in question is good to share around! (Hint, hint.) I explicitly approve it, thinking about all the times I’ve had the desire to support a marginalized group I’m not part of, but being unsure
 would an actual member of this marginalized group find this helpful? Or is it someone else’s idea of what’s good for a marginalized group? Or


Anyway
 I’m hoping I headed that off. I am careful not to continue to post prescriptive things for the rest of the day.

I used to be a much stronger self-advocate. I think it was the newness of being out, maybe? Coming to a waypoint in my gender journey that felt like it finally had some sharable clarity to it?

But along the way, I just got tired. There’s a lot of deafening silence, perhaps from people who are afraid they’ll hurt me somehow. There’s a lot of apology, which I know doesn’t come from a bad place, but violently grabs hold of my empathy anyway, draining me, leading me to feel like I should be making someone else feel better.

There’s even been some pushback. The memories of those first few times I ran up against that and broke down as a result
 I still have scars from those. Scars I’m not keen on putting out in front of me once again.

I find myself desperately wishing that cisgender people, who don’t have their own skin in the game, would educate themselves, take up the sword here, and fight. I find myself wanting to define “cisgender” nonetheless, for those of you who maybe somehow have not managed to hear it yet, and exhaustion weighs on my shoulders yet again.

There’s one huge, huge bright spot in all this. I have a colleague who reached out to me after hearing me misgendered so many times in our day-to-day work—they wanted to help, but wanted to make sure they were doing it in a way that would be helpful! I value them so much and I hope they know that.

I wish everyone was like that. I’d still need to expend some energy, but
 it’d be to help someone understand how they can spread the advocacy fire. Someone who isn’t non-binary could wish me a happy International Non-binary People’s Day, drop some knowledge on the rest of the world, help address problems one-on-one, that sort of thing.

There’s an in-joke going around about how it’s our birthday today. Which, you know, is kind of apt when you think about all this. I don’t wish myself a happy birthday. I wish you a happy birthday. And I bring you gifts.

In that vein, my birthday wish today is not tolerance with its endless self-advocacy, not acceptance with its disinterest, but celebration—the joy you find in me being me, and the desire to share and defend that follows that joy.

by Mattie Behrens at July 14, 2020 12:42 PM

May 24, 2020

Whitemice Consulting

Installing The Zoom Client On openSUSE 15.1

Uh oh, in a default-ish GNOME install of openSUSE 15.1 there are a couple of unmatched / unclaimed dependencies. It appears Zoom Inc. did not try very hard when drafting the spec for their LINUX clients.

awilliam@linux-tozb:~/Downloads> rpm -Uvh zoom_openSUSE_x86_64.rpm 
warning: zoom_openSUSE_x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 61a7c71d: NOKEY
error: Failed dependencies:
    libxcb-xtest.so.0()(64bit) is needed by zoom-5.0.408598.0517_openSUSE-1.x86_64
    ibus-m17n is needed by zoom-5.0.408598.0517_openSUSE-1.x86_64

Let's try the obvious...

awilliam@linux-tozb:~/Downloads> sudo zypper in libxcb-xtest0
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following NEW package is going to be installed:
  libxcb-xtest0

1 new package to install.
Overall download size: 17.7 KiB. Already cached: 0 B. After the operation,
additional 10.1 KiB will be used.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving package libxcb-xtest0-1.13-lp151.3.2.x86_64
                                     (1/1),  17.7 KiB ( 10.1 KiB unpacked)
Retrieving: libxcb-xtest0-1.13-lp151.3.2.x86_64.rpm ................[done]

Checking for file conflicts: .......................................[done]
(1/1) Installing: libxcb-xtest0-1.13-lp151.3.2.x86_64 ..............[done]

awilliam@linux-tozb:~/Downloads> sudo zypper in ibus-m17n
Loading repository data...
Reading installed packages...
Resolving package dependencies...

The following 5 NEW packages are going to be installed:
  ibus-m17n libm17n0 libotf0 m17n-db m17n-db-lang

The following recommended package was automatically selected:
  m17n-db-lang

5 new packages to install.
Overall download size: 1.6 MiB. Already cached: 0 B. After the operation,
additional 6.9 MiB will be used.
Continue? [y/n/v/...? shows all options] (y): y
Retrieving package libotf0-0.9.13-lp151.2.3.x86_64
                                     (1/5),  47.6 KiB ( 86.3 KiB unpacked)
Retrieving: libotf0-0.9.13-lp151.2.3.x86_64.rpm ....................[done]
Retrieving package m17n-db-1.7.0-lp151.2.1.noarch
                                     (2/5),   1.3 MiB (  6.2 MiB unpacked)
Retrieving: m17n-db-1.7.0-lp151.2.1.noarch.rpm .........[done (7.8 KiB/s)]
Retrieving package m17n-db-lang-1.7.0-lp151.2.1.noarch
                                     (3/5),  17.1 KiB ( 23.0 KiB unpacked)
Retrieving: m17n-db-lang-1.7.0-lp151.2.1.noarch.rpm ................[done]
Retrieving package libm17n0-1.7.0-lp151.2.3.x86_64
                                     (4/5), 240.8 KiB (596.5 KiB unpacked)
Retrieving: libm17n0-1.7.0-lp151.2.3.x86_64.rpm ....................[done]
Retrieving package ibus-m17n-1.3.4-lp151.2.4.x86_64
                                     (5/5),  31.6 KiB ( 69.8 KiB unpacked)
Retrieving: ibus-m17n-1.3.4-lp151.2.4.x86_64.rpm ...................[done]

Checking for file conflicts: .......................................[done]
(1/5) Installing: libotf0-0.9.13-lp151.2.3.x86_64 ..................[done]
(2/5) Installing: m17n-db-1.7.0-lp151.2.1.noarch ...................[done]
(3/5) Installing: m17n-db-lang-1.7.0-lp151.2.1.noarch ..............[done]
(4/5) Installing: libm17n0-1.7.0-lp151.2.3.x86_64 ..................[done]
(5/5) Installing: ibus-m17n-1.3.4-lp151.2.4.x86_64 .................[done]

And what happens now?

awilliam@linux-tozb:~/Downloads> rpm -Uvh zoom_openSUSE_x86_64.rpm 
warning: zoom_openSUSE_x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 61a7c71d: NOKEY
error: can't create transaction lock on /usr/lib/sysimage/rpm/.rpm.lock (Permission denied)
awilliam@linux-tozb:~/Downloads> sudo rpm -Uvh zoom_openSUSE_x86_64.rpm 
warning: zoom_openSUSE_x86_64.rpm: Header V4 RSA/SHA1 Signature, key ID 61a7c71d: NOKEY
Preparing...                              ################################# [100%]
Updating / installing...
   1:zoom-5.0.408598.0517_openSUSE-1      ################################# [100%]
run post install script, action is 1...

Installed; and it works.

by whitemice at May 24, 2020 06:52 PM

April 20, 2020

Whitemice Consulting

gEdit's Amazing External Tools

In a few recent conversations I have become aware of an unawareness - an unawareness of the awesome that is gedit's best feature: External Tools. External Tools allow you to effortlessly link the power of the shell, Python, or whatever into an otherwise already excellent text editor, yielding maximum awesome. External Tools, unlike some similar features in many IDEs, is drop-dead simple to use - you do not need to go somewhere and edit files, etc... you can create and use them without ever leaving the gedit UI.


Plugins tab of the Preferences dialog.

To enable External Tools [which is a plugin - as is nearly every feature in gedit] go to the Plugins tab of the Preferences dialog and check the box for "External Tools". External Tools is now active. Close the dialog and proceed in defining the tools useful to you.

With External Tools enabled there will be a "Manage External Tools..." option in the global menu. Within the Tools menu there is also an "External Tools" submenu - every external tool you define will be available in that menu, automatically. The list of defined tools in that submenu will also include whatever hot-key you may have bound to the tool - as you likely will not remember it at first.


Manage External Tools Dialog

Within the Manage External Tools dialog you can start defining what tools are useful to you. For myself the most useful feature is the ability to perform in-place transformations of the current document; to accomplish this set Input to "Current Document" and Output to "Replace Current Document". With that Input & Output the current document is streamed to your defined tool as standard input and the standard output from the tool replaces the document. Don't worry - Undo [Ctrl-Z] still works if your tool did not do what you desired.

What are some useful External Tools? That depends on what type of files and data you deal with on a regular basis. I have previously written a post about turning a list of values into a set format - that is useful for cut-n-paste into either an SQL tool [for use as an IN clause] or into a Python editor [for x=set(....)]. That provides a simple way to take perhaps hundreds of rows and turn them into usable data very simply.

Otherwise some tools I find useful are:

Format JSON to be nicely indented

#!/bin/sh
python -m json.tool

Use input/output settings to replace current document.

Open a terminal in the directory of the document

#!/bin/sh
gnome-terminal --working-directory=$GEDIT_CURRENT_DOCUMENT_DIR &

Set the input/output for this action to "Nothing"

Remove leading spaces from lines

#!/bin/sh
sed 's/^[[:blank:]]*//'

Use input/output settings to replace current document.

Remove trailing spaces from lines

#!/bin/sh
sed 's/[[:blank:]]*$//'

Use input/output settings to replace current document.

Keep only unique lines of the file

#!/bin/sh
sort | uniq

Use input/output settings to replace current document.

Format an XML file with nice indentation

#!/bin/sh
xmllint --format - -

Use input/output settings to replace current document.

IN Clause Generator

This takes a document with one value per line and converts it to an SQL like IN clause. The output is also appropriate for creating Python set values.

#!/usr/bin/env python
import sys

iteration = 0
line_length = 0
text = sys.stdin.readline()
while (text !=  ''):
  text = text.strip()
  if (len(text) > 0):
    if (iteration == 0):
      sys.stdout.write('(')
    else:
      sys.stdout.write(', ') 
    if (line_length > 74):
      sys.stdout.write('\n ')
      line_length = 0
    if (len(text) > 0):
      sys.stdout.write('\'{0}\''.format(text))
    line_length = line_length + len(text) + 4
    iteration = iteration + 1
  text = sys.stdin.readline()
sys.stdout.write(')')  
sys.stdout.flush()

Input is "Current document", output is "Replace current document".


by whitemice at April 20, 2020 05:21 PM

January 03, 2020

zigg.com (Matt Beherens' blog)

On supporting a friend

I've been thinking this morning about the nature of support, and how we can offer it to our loved ones.

I think this is an unfortunately really common thought pattern: in order to offer support, we have to take an active role in another’s life. We have to make our loved ones’ endeavors our own, we have to literally take part, right? Otherwise, the thinking goes, we’re not being supportive.

But if we don't enjoy the thing, if we don't feel that personal pull, if we are personally worn-out, if our hearts are not there, is that actually support at all? Are we sacrificing a part of ourselves, tearing ourselves up inside and giving our loved ones a tattered piece of paper that says “support” that says more about how we hurt than how we love them?

There's a little mental exercise I do often, using my dear friends as foils. It goes like this: instead of asking what they would want me to do (which is colored by my own negative feelings of self), if the tables were turned, what would I want them to do? The answer is clear and rings true: I would rather see them care for themselves, do what makes them happy, and share that with me.

The recipe for good support, then, isn't that I necessarily engage directly with what makes a loved one happy—unless doing so personally brings me joy. The recipe is simply this: that I draw happiness from the fact that they are doing something they love, enjoying something, believing in a thing deep in their heart.

Of course, if we do also legitimately find joy in sharing something with a friend, we do find that shared experiences bring us closer, bring our hearts together. But you can't force hearts together to get that; they need to share a bond of mutual enjoyment. If you aren't into it, don't force yourself—rather, take joy in your loved one's joy and share that instead. That will also bring you closer together, without tearing either of you up in the process.

Consider sharing what you love with those you love, and consider that you don't need to be experiencing it right alongside them to be a good friend. You can just share in their joy. And that, right there, makes you a good friend—and I believe your loved ones would say the same.

by Mattie Behrens at January 03, 2020 08:43 PM

November 25, 2019

Whitemice Consulting

Uncoloring ls (2019)

This is an update from "Uncoloring ls" which documents how to disable colored ls output on older systems which define that behavior in a profile.d script.

Some more recent systems load the colorization rules in a more generalized fashion. The load still occurs from a profile.d script, typically ls.bash, but mixed in with other functionality related to customizing the shell.

The newer profile.d script looks first for $HOME/.dir_colors, and if not found looks for /etc/DIR_COLORS.

To disable colorized ls for a specific user create an empty .dir_colors file.

touch $HOME/.dir_colors

Or to disable it for all users make the /etc/DIR_COLORS file not exist.

sudo mv /etc/DIR_COLORS /etc/DIR_COLORS.disabled

by whitemice at November 25, 2019 06:33 PM

October 21, 2019

Whitemice Consulting

PostgreSQL: "UNIX Time" To Date

In some effort to avoid time-zone drama, or perhaps due to fantasies of efficiency, some developer put a date-time field in a PostgreSQL database as an integer; specifically as a UNIX Time value. ¯\_(ツ)_/¯

How to present this as a normal date in a query result?

date_trunc('day', (TIMESTAMP 'epoch' + (j.last_modified * INTERVAL '1 second'))) AS last_action,

This is the start of the epoch plus the value in seconds - UNIX Time - calculated and cast as a non-localized year-month-day value.

Clarification#1: j is the alias of the table in the statement's FROM.

Clarification#2: last_modified is the field which is an integer time value.
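
For a quick sanity check of the SQL expression, the same conversion can be done in Python (the value below is just an example UNIX Time integer):

from datetime import datetime, timezone

last_modified = 1571664000  # example UNIX Time value
print(datetime.fromtimestamp(last_modified, tz=timezone.utc).date())  # 2019-10-21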

by whitemice at October 21, 2019 01:36 PM

September 11, 2019

Whitemice Consulting

PostgreSQL: Casted Indexes

Dates in databases are a tedious thing. Sometimes a time value is recorded as a timestamp, at other times - probably in most cases - it is recorded as a date. Yet it can be useful to perform date-time queries using a representation of time distinct from what is recorded in the table. For example, a database records timestamps, but I want to look up records by date.

To this end PostgreSQL supports indexing a table by a cast of a field.

Create A Sample

testing=> CREATE TABLE tstest (id int, ts timestamp);
CREATE TABLE
testing=> INSERT INTO tstest VALUES (1,'2018-09-01 12:30:16');
INSERT 0 1
testing=> INSERT INTO tstest VALUES (2,'2019-09-02 10:30:17');
INSERT 0 1

Create The Index

Now we can use the "::" operator to create an index on the ts field, but as a date rather than a timestamp.

testing=> create index tstest_tstodate on tstest ((ts::date));
CREATE INDEX

Testing

Now, will the database use this index? Yes, provided we cast ts as we do in the index.

testing=>SET ENABLE_SEQSCAN=off;
SET
testing=> EXPLAIN SELECT * FROM tstest WHERE ts::date='2019-09-02';
                                 QUERY PLAN                                  
-----------------------------------------------------------------------------
 Index Scan using tstest_tstodate on tstest  (cost=0.13..8.14 rows=1 width=12)
   Index Cond: ((ts)::date = '2019-09-02'::date)
(2 rows)

For demonstration it is necessary to disable sequential scanning, ENABLE_SEQSCAN=off, otherwise with a table this small PostgreSQL will never use any index.

Casting values in an index can be a significant performance win when you frequently query data in a form differing from its recorded form.


by whitemice at September 11, 2019 03:09 PM

August 30, 2019

Whitemice Consulting

Listing Printer/Device Assignments

The assignment of print queues to device URIs can be listed from a CUPS server using the "-v" option.

The following authenticates to the CUPS server cups.example.com as user adam and lists the queue and device URI relationships.

[user@host ~]# lpstat -U adam -h cups.example.com:631 -v | more
device for brtlm1: lpd://cismfp1.example.com/lp
device for brtlp1: socket://lpd02914.example.com:9100
device for brtlp2: socket://LPD02369.example.com:9100
device for brtmfp1: lpd://brtmfp1.example.com/lp
device for btcmfp1: lpd://btcmfp1.example.com/lp
device for cenlm1: lpd://LPD04717.example.com/lp
device for cenlp: socket://LPD02697.example.com:9100
device for cenmfp1: ipp://cenmfp1.example.com/ipp/
device for ogo_cs_sales_invoices: cups-to-ogo://attachfs/399999909/${guid}.pdf?mode=file&pa.cupsJobId=${id}&pa.cupsJobUser=${user}&pa.cupsJobTitle=${title}
device for pdf: ipp-to-pdf://smtp
...

by whitemice at August 30, 2019 07:36 PM

Reprinting Completed Jobs

Listing completed jobs

By default the lpstat command lists the queued/pending jobs on a print queue. However the completed jobs still present on the server can be listed using the "-W completed" option.

For example, to list the completed jobs on the local print server for the queue named "examplep":

[user@host] lpstat -H localhost -W completed examplep
examplep-8821248         ogo             249856   Fri 30 Aug 2019 02:17:14 PM EDT
examplep-8821289         ogo             251904   Fri 30 Aug 2019 02:28:04 PM EDT
examplep-8821290         ogo             253952   Fri 30 Aug 2019 02:28:08 PM EDT
examplep-8821321         ogo             249856   Fri 30 Aug 2019 02:34:48 PM EDT
examplep-8821333         ogo             222208   Fri 30 Aug 2019 02:38:16 PM EDT
examplep-8821337         ogo             249856   Fri 30 Aug 2019 02:38:50 PM EDT
examplep-8821343         ogo             249856   Fri 30 Aug 2019 02:39:31 PM EDT
examplep-8821351         ogo             248832   Fri 30 Aug 2019 02:41:46 PM EDT
examplep-8821465         smagee            1024   Fri 30 Aug 2019 03:06:54 PM EDT
examplep-8821477         smagee          154624   Fri 30 Aug 2019 03:09:38 PM EDT
examplep-8821493         smagee          149504   Fri 30 Aug 2019 03:12:09 PM EDT
examplep-8821505         smagee           27648   Fri 30 Aug 2019 03:12:36 PM EDT
examplep-8821507         ogo             256000   Fri 30 Aug 2019 03:13:26 PM EDT
examplep-8821562         ogo             251904   Fri 30 Aug 2019 03:23:14 PM EDT

Reprinting a completed job

Once the job id is known, the far left column of the lpstat output, the job can be resubmitted using the lp command.

To reprint the job with the id of "examplep-8821343", simply:

[user@host] lp -i examplep-8821343 -H restart

by whitemice at August 30, 2019 07:29 PM

Creating & Deleting CUPS Queues via CLI

Create A Print Queue

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631 -p examplelm1 -E \
  -m "foomatic:HP-LaserJet-laserjet.ppd" -D "Example Pick Ticket Printer"\
   -L "Grand Rapids" -E -v lpd://printer.example.com/lp

This will create a queue named examplelm1 on the host cups.example.com as user adam.

  • "-D" and "-L" specify the printer's description and location, respectively.
  • The "-E" option, which must occur after the "-h" and -p" options instructs CUPS to immediately set the new print queue to enabled and accepting jobs.
  • "-v" option specifies the device URI used to communicate with the actual printer.

The printer driver file "foomatic:HP-LaserJet-laserjet.ppd" must be a PPD file available to the print server. PPD files installed on the server can be listed using the "lpinfo -m" command:

[root@crew ~]# lpinfo -m | more
foomatic:Alps-MD-1000-md2k.ppd Alps MD-1000 Foomatic/md2k
foomatic:Alps-MD-1000-ppmtomd.ppd Alps MD-1000 Foomatic/ppmtomd
foomatic:Alps-MD-1300-md1xMono.ppd Alps MD-1300 Foomatic/md1xMono
foomatic:Alps-MD-1300-md2k.ppd Alps MD-1300 Foomatic/md2k
foomatic:Alps-MD-1300-ppmtomd.ppd Alps MD-1300 Foomatic/ppmtomd
...

The existence of the new printer can be verified by checking its status:

[root@host ~]# lpq -Pexamplelm1
examplelm1 is ready
no entries

The "-l" options of the lpstat command can be used to interrogate the details of the queue:

[root@host ~]# lpstat -l -pexamplelm1
printer examplelm1 is idle.  enabled since Fri 30 Aug 2019 02:56:11 PM EDT
    Form mounted:
    Content types: any
    Printer types: unknown
    Description: Example Pick Ticket Printer
    Alerts: none
    Location: Grand Rapids
    Connection: direct
    Interface: /etc/cups/ppd/examplelm1.ppd
    On fault: no alert
    After fault: continue
    Users allowed:
        (all)
    Forms allowed:
        (none)
    Banner required
    Charset sets:
        (none)
    Default pitch:
    Default page size:
    Default port settings:

Delete A Print Queue

A print queue can also be deleted using the same lpadmin command used to create the queue.

[root@host ~]# /usr/sbin/lpadmi -U adam -h cups.example.com:631  -x examplelm1
Password for adam on crew.mormail.com? 
lpadmin: The printer or class was not found.
[root@host ~]# lpq -Pexamplelm1
lpq: Unknown destination "examplelm1"!

Note that deleting the print queue only appears to fail; this is because the lpadmin command attempts to report the status of the named queue after the operation.

by whitemice at August 30, 2019 07:11 PM

August 25, 2019

zigg.com (Matt Beherens' blog)

Your candle

You have a candle. It has a beautiful flame, unique and in colors not often seen in this world.

You want everyone to share the joy you get from that candle, to understand where the flame comes from, to love its colors like you do.

But it’s not like any candle they’ve seen. And so you have to burn it brighter, hotter, really let them get a good look at it and the light it casts on your face, let them see you illuminated in its beauty.

Unfortunately, you only have the one candle. And when it’s spent, it’s spent.

It breaks your heart, but as you’ve watched that candle burn, you know
 you can’t just give it to everyone, share it with everyone. You can’t make everyone look at it. There just isn’t enough to go around. You’ll burn it down to your fingertips getting it bright enough to even get them to consider looking at it. You'll eventually not be able to show anyone anything.

Some people will know you have a beautiful light and they beg to see it. But they’re carrying their own candle and won’t put it down, so you’ll need to burn yours much more brightly for them to see it. You'll risk burning it down even faster.

Some people you desperately want to share the light with, people you want to tell of the joy it brings you. But they think it’s a strange color and complain they can’t see you well by its light. If only it were a yellow flame like their candles burned with. Then they could see. Why isn’t your flame yellow?

Some wave you away when you show up with your candle. We'll let you have it, but don't bring it too close, they say. It makes me uncomfortable.

Some want to extinguish your flame. There’s no place for that here, they say. It's unnatural.

A few people, though, have their own candles that burn in their own, unique, beautiful way—like but also wholly unlike yours—and you can just touch your candle to theirs, creating something new, a unity creating brand-new colors, never seen before, yet clearly composed of each of your flames.

And that’s when it’s just you two. You can add more, and more, and more. Each of you contributing your own quiet, small flame, never burning any of your candles too much, and yet creating a robust and glorious show of light and warmth and love.

Even as you stand there, making a delightful, colorful symphony of beauty, those who do not understand the beauty you have are grumbling, saying that you all should just get candles out of the boxes they brought. They all burn the same way, and look—there are so many, we will never run out. It will be much easier for you if you just burn these candles like us.

And you take a stand and say, no. I will not extinguish this beauty. I will delight in it, share it with those who can see it as it is, those who will put their own lights down, those who will defend its quiet beauty.

And maybe, just maybe, even though they have simple candles themselves, they can use what they have to illuminate the way. They can show everyone how they can put down their bright and brash fire. They can show everyone how to approach with love and understanding—forget themselves and shed their preconceptions of what a candle should look like. Look at what you have to show them.

Your beautiful flame.

by Mattie Behrens at August 25, 2019 11:05 AM

Review: GRIS

I haven't reviewed a game since 2011—my last was my review of Atsumete! Kirby (a.k.a. Kirby Mass Attack) for my old games media stomping grounds formerly known as N-Sider. But after playing Nomada Studio's GRIS this weekend, I felt like sitting down and writing because I have been moved in a way that I haven't been in a good while.

Nintendo has this great setup these days; if you wishlist a game on the Switch's eShop, you'll get an email when one goes on sale, which is great because perusing the eShop's games-on-sale list is charitably an exercise in “wow, there are a lot of games here that are not for me”. With the sale emails, I am thus freed from this responsibility. I wishlisted GRIS based on its launch trailer, which, goddamn, isn't that beautiful? And then I got the email and it was around ten bucks and I said “yes”.

I'll be brief about the premise: Gris, the blue-haired protagonist in the trailer, has lost their beautiful singing voice—and the game is about them working through that loss. There's nary a word apart from the unobtrusive achievements you'll unlock at various points (many of which I still have undone); the story is told through the changes in the world, the beautiful, beautiful soundtrack and art, and the layering of color. Their world has been shattered; they have their loss to cope with and their life to rebuild, and this will literally happen as you progress.

I was asked by a close friend who was actually a bit wary, wondering if GRIS could be a traumatic or triggering experience, with the main character going through a difficult loss. I don't believe it is. The striking visuals and music may make you tear up (oh hey, it's me); and there's plenty to read into the art and animation—colors representing strong emotion, the scenes of a world crumbled away, and at times fleeing from literally being swallowed by dark shapes—but it gets no more concrete than that. It's powerful without realizing the kinds of losses you may experience in the real world.

But it is moving, and in surprising ways. It feels almost clichĂ© to describe your progression through a video game and your unlocking of abilities as part of that as “empowering”, and yet that's literally what it is, with the game's design built hand-in-hand with its narrative. The abilities you gain and the mechanics you experience are aligned with Gris' journey, starting at the very beginning when Gris can barely move, slumping and collapsing instead of jumping, right through the end when acceptance gifts them the ability to give life to the world around them. Early on, I had the game pegged as (if you'll forgive me) a “basic indie platformer” without much finesse, only to find that by the end, Gris had become strong and fluid, moving through their world with ease and intent.

I found myself experiencing some artificially-induced anxiety by the numerous points of no return—especially as there are collectible items throughout the game I could often see but never reached before they were locked off behind me—but take heart; when you've completed the experience, you'll be able to go back to several points via a chapter select and give those another shot. I've only briefly experienced this so far, but I did find it rather interesting that replaying the opening chapter made me feel authentically powerless, instead of artificially like I find myself feeling when returning to beginning of most games.

It seems to me we are firmly in an era of games seeking to be art—not in that shallow way that an industry desperately reaching for respectability did a decade ago, but instead in a truly authentic way, drawn from experiences, realized around the human condition. Much like Gris at the end of their journey, I feel GRIS stands tall, confident, and strong in this pantheon. I know from years of experience watching video games that a studio making one amazing game doesn't mean their next will be the same, but I'm nonetheless finding myself desperately curious about what Nomada may make next. Even if they never make another game like this, GRIS moved me and I am grateful for that experience.

by Mattie Behrens at August 25, 2019 11:04 AM

July 25, 2019

Whitemice Consulting

Changing Domain Password

Uh oh, Active Directory password is going to expire!

Ugh, do I need to log into a Windows workstation to change my password?

Nope, it is as easy as:

awilliam@beast01:~> smbpasswd -U DOMAIN/adam  -r example.com
Old SMB password:
New SMB password:
Retype new SMB password:
Password changed for user adam

In this case DOMAIN is the NetBIOS domain name and example.com is the domain's DNS domain. One could also specify a domain controller for -r, however in most cases the bare base domain of an Active Directory backed network will resolve to the active collection of domain controllers.

by whitemice at July 25, 2019 03:29 PM

May 24, 2019

Whitemice Consulting

CRON Jobs Fail To Run w/PAM Error

Added a cron job to a service account's crontab using the standard crontab -e -u ogo command. This server has been chugging away for more than a year, with lots of stuff running within the service account - but nothing via cron.

Subsequently the cron jobs didn't run. :( The error logged in /var/log/cron was:

May 24 14:45:01 purple crond[18909]: (ogo) PAM ERROR (Authentication service cannot retrieve authentication info)

The issue turned out to be that the service account - which is a local account, not something from AD, LDAP, etc... - did not have a corresponding entry in /etc/shadow. This breaks CentOS7's default PAM stack (specified in /etc/pam.d/crond). The handy utility pwck will fix this issue, after which the jobs ran without error.

[root@purple ~]# pwck
add user 'ogo' in /etc/shadow? y
pwck: the files have been updated
[root@purple ~]# grep ogo /etc/shadow
ogo:x:18040:0:99999:7:::

by whitemice at May 24, 2019 08:09 PM

April 18, 2019

Whitemice Consulting

MySQL: Reporting Size Of All Tables

This is a query to report the number of rows and the estimated size of all the tables in a MySQL database:

SELECT 
  table_name, 
  table_rows, 
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
FROM information_schema.tables
WHERE table_schema = 'maindb';

Results look like:

table_name                                  table_rows mb_size 
------------------------------------------- ---------- ------- 
mageplaza_seodashboard_noroute_report_issue 314314     37.56   
catalog_product_entity_int                  283244     28.92   
catalog_product_entity_varchar              259073     29.84   
amconnector_product_log_details             178848     6.02    
catalog_product_entity_decimal              135936     16.02   
shipperhq_quote_package_items               115552     11.03   
amconnector_product_log                     114400     767.00  
amconnector_productinventory_log_details    114264     3.52    

This is a very useful query as the majority of MySQL applications are poorly designed; they tend not to clean up after themselves.

by whitemice at April 18, 2019 06:30 PM

April 08, 2019

Whitemice Consulting

Informix: Listing The Locks

The current database locks in an Informix engine are easily enumerated from the sysmaster database.

SELECT 
  TRIM(s.username) AS user, 
  TRIM(l.dbsname) AS database, 
  TRIM(l.tabname) AS table,
  TRIM(l.type) AS type,
  s.sid AS session,
  l.rowidlk AS rowid
FROM sysmaster:syslocks l
  INNER JOIN sysmaster:syssessions s ON (s.sid = l.owner)
WHERE l.dbsname NOT IN('sysmaster')
ORDER BY 1; 

The results are pretty straightforward:

User      Database  Table            Type  Session  Row ID
extranet  maindb    site_master      IS    436320   0
shuber    maindb    workorder        IS    436353   0
shuber    maindb    workorder        IX    436353   0
shuber    maindb    workorder_visit  IS    436353   0
extranet  maindb    customer_master  IS    436364   0
jkelley   maindb    workorder        IX    436379   0
jkelley   maindb    workorder        IS    436379   0
mwathen   maindb    workorder        IS    436458   0

by whitemice at April 08, 2019 08:10 PM

January 26, 2019

zigg.com (Matt Beherens' blog)

Can the macOS Disk Utility really erase an SSD?

Laptop computers, especially those with a lot of internal storage, are very convenient. In the same amount of physical space that a magazine would take up, we can carry an amazing amount of data with us and work with it anywhere. One flip-side of that benefit is that all that data remains inside that computer even after we’ve moved on to a new one, unless we take steps to erase it first.

With older laptops featuring spinning magnetic hard disk drives, a lengthy, random erase process was the best way to go. But that’s not true for modern MacBooks with their solid state drives; Apple has even removed the option. So how do we go about erasing these computers? And do those processes work?

Note: Since this article was first posted, there has been some confusion about the setup used. I’m using a MacBook Pro with its built-in SSD. I’m also running Disk Utility directly on the MacBook itself, not over Target Disk Mode. This process has always been YMMV, but particularly if your setup is different than mine, expect variations.

The Best Way

By far, the best way to keep your data secure is to use full-disk encryption, e.g. FileVault. Every bit of data you write to any disk after you’ve enabled FileVault on it is unreadable without the key, protecting it even if you lose the computer or it’s stolen.

Erasing the computer is now really easy, too. Everything on the computer is useless without the encryption key, so you simply need to erase the key itself. Since the key is cryptographically secured by your password, you just need to not sign into the computer—but you can also erase the encrypted key, too, with a simple disk erase.

But what if you didn’t use FileVault? Your disk is now full of data that could be sensitive. You’ll have to get rid of it somehow.

The Fallback Way

Apple recommends that, if you’re giving away or selling your Mac, you should simply erase it with Disk Utility first.

This advice puts people like myself, who have had long histories with hard drives and understand how they “delete” data—by leaving it around and just “losing track” of it—on high alert. If you just did a simple, quick erase on a hard drive ten years ago, any competent data recovery software would turn up a goldmine of data.

Erasing a disk the quick way in those days only put a new filesystem header on the front of the disk, like replacing the table of contents of a book with an empty one, but leaving the rest of the pages in the book intact. They did this for speed; overwriting all the data on a disk takes many hours. But it leaves a lot of data behind, which is why you’ll find plenty of articles advising how to use the macOS command line to force a hard-drive-style secure erase—where you overwrite it with random data many times—on a solid-state drive.

Thankfully, there’s a way that you can have a modern hard drive—old-style spinning or solid-state—erased very quickly, and securely. It’s a close cousin to full-disk encryption, and it’s called a secure erase.

A new drive that’s capable of secure erase has a random encryption key generated for it on day one. That key is kept on the drive, and all data written to it is encrypted with that key. When a secure erase is requested, that key is destroyed, leaving all the encrypted data unreadable.

Apple, being Apple, isn’t telling us (at least, not anywhere I can find) if their Disk Utility erase process is actually a secure erase. I decided to look into whether a Disk Utility erase does leave easy-to-read breadcrumbs behind, or whether it cleans up after itself.

Creating Some Data to Find

A disk—any disk—is basically a giant file, the size of the entire disk. The easiest way to look for data to be recovered on a disused disk is to scan it, beginning to end, and look for patterns that indicate useful data.

The first thing I needed to do to test this out was fill a disk with data I could easily find again. To do this, I took the Ann Arbor office loaner MacBook—recently erased from its last borrower—and half-filled its disk with a bunch of files.

(Warning: if you do this, you’re going to fill your disk with junk—25,000 copies of a 4.6 megabyte file containing 100,000 copies of the phrase “The quick brown fox jumped over the lazy dog.”—enough to fill half a 256 gigabyte SSD, which was my goal.)

$ for n in `seq 100000`
> do
>   echo 'The quick brown fox jumped over the lazy dog.'
> done >template.txt
$ for n in `seq 25000`
> do
>   cp template.txt template_$n.txt
> done

That done, I verified that the disk space was actually taken up.

Now, to inspect the raw disk, I had to reboot; macOS doesn’t allow access to the raw disk device with standard Unix tools, even if you’re root. I also found out the macOS recovery partition didn’t have the tools I needed, so I booted Ubuntu instead.

Once in, the incantation to scan the disk—this will read the entire disk in 1 megabyte chunks, and pass it through a hex dump tool that we can use to visually inspect the data:

# dd if=/dev/sda2 bs=1024k | hexdump -C

And a large portion of the output—which I stopped, because it would take far too long to visually read the whole disk—looked like this:

Erase and Aftermath

If I were to do a naĂŻve erase of this disk by writing just a new filesystem header to the beginning, like most old-school disk erases did, the vast majority of this data would still be fully readable.

But I wasn’t planning on doing an old-school disk erase. My next step was to reboot into the macOS recovery partition and erase the disk with Disk Utility like Apple advises.

I didn’t bother reinstalling macOS into the newly-erased drive. It might overwrite some of the data if it hadn’t been completely erased, but it certainly wouldn’t overwrite all of it regardless. Opting to skip the install step entirely gave me the greatest chance to find any trace of the data.

Once erased, I rebooted into Ubuntu one more time, and ran the same command. The output was much shorter this time—I let it run to the end, seeing no trace of my data, but just this:

The middle is where our data would’ve been—it’s over 250 gigabytes of zeroes. Apple’s recommended erase procedure has, in the space of a few seconds, replaced all our old data with a big empty expanse of nothing.

Conclusion and Caveats

So what does this mean? This is exactly what I’d expect to see if Apple had, in fact, implemented a secure erase with Disk Utility, like we suspected. It means that whatever data you had before the erase is inaccessible to just about anyone who acquires your computer and might want to grab a copy of Disk Drill and start digging, which is great news.

It doesn’t mean that data is guaranteed to be gone, however. Unless we have evidence that Apple actually is secure-erasing the drive, there are processes by which more well-resourced adversaries could recover data—for example, if they were simply marking every part of the drive as “free”, it’s possible someone could convince the SSD to give up that data once again.

Given this, your safest bet is still to always use full-disk encryption on any MacBook. However, I think it’s reasonable to assume that unless your threat model includes adversaries who will spare no expense to recover your data, if you haven’t used FileVault, you don’t need to be anxious that data you wrote in the past to this computer is a problem.

My recommendation is this: use FileVault going forward, and make sure you give your computer a regular erase before you give it up.

This article originally appeared on Atomic Spin.

by Mattie Behrens at January 26, 2019 01:59 PM

Representing function properties in TypeScript

We’ve been using TypeScript on an Electron project. It’s been a huge win already—a little additional upfront investment gives us more confidence that our code is correct and reduces the chance that it will pass unexpectedly-shaped objects around, a source of many bugs in my past Node applications.

But sometimes, it’s not immediately clear how to type certain kinds of objects. You can, of course, represent these as any whenever you need to—but any any you rely on can weaken your code’s quality. Last week, I discovered another way to avoid falling back on that crutch, thanks to the power of TypeScript’s type system.

Electron applications rely on IPC to communicate between their main Node process and the renderer processes that present the user interface. Because our application uses IPC extensively, we decided to wrap Electron’s IPC libraries in a lightweight custom object that could emit log messages. This would allow us to trace IPC problems, and it could easily be replaced by a fake IPC implementation for unit testing.

To implement the logging of incoming IPC messages, we attached a wrapper function to Electron’s IPC library instead of the requested listener, like this:

ipcMain.on(channel, (event: Electron.IpcMainEvent, ...args: any[]): void => {
  console.log(`heard ${channel}`, args);
  listener(event, ...args);
});

This worked great until we needed to implement one new piece of functionality: removing a defunct listener.

I’m Not Listening

Removing a listener from an EventEmitter is important in a long-lived process, especially if you’re attaching listeners to a long-lived object like Electron’s IPC implementation.

If you fail to do this, you’ll not only be leaking memory by creating references that can’t be garbage-collected. You’ll also potentially be setting your application up for hard-to-trace bugs when zombie listeners you didn’t think were still around come roaring back to life.

If you’re simply listening to one event, solving this problem is fairly easy—just use .once instead of .on, and the EventEmitter will take care of it for you.

If you’ve got multiple listeners, though—like a pair of success and error listeners, one of which must remove the other, you must use .removeListener—and that requires a function reference to identify which listener to remove. Because we wrapped the real listener, we need to ask the EventEmitter to remove our wrapper, which we don’t have a reference to—and tracking it is an exercise in complexity that I’d rather not add to a wrapper class.

The solution I arrived at involved attaching a .wraps property to our wrapper functions, holding a reference to the listener function:

function wrapCallbackWithLogger(callback, message) {
  const listener = (event, ...args) => {
    console.log(message);
    callback(event, ...args);
  };
  listener.wraps = callback;
  return listener;
}

This allowed me to write code that would search the listeners attached to any particular IPC channel for the wrapper function wrapping the listener we were asked to remove:

const listenerToRemove =
  listeners.filter(candidate => candidate.wraps === wrappedListener)[0];

Unfortunately, none of this made TypeScript very happy. And that is as it should be; functions don’t have wraps properties!

Declaring Our Intent to Wrap

The very first thing I needed to do was declare some types so that TypeScript would understand the shape of our wrapper function. The function I wanted to wrap was easy enough; Electron types already had IpcMainEventListener and IpcRendererEventListener for both sides of its IPC implementation. I decided to write my own generic listener type:

declare type IpcEventListener<E> = (event: E, ...args: any[]) => void;

Now that I had this type, I could extend it with the .wraps property easily:

interface WrappedIpcEventListener<E> extends IpcEventListener<E> {
  wraps: IpcEventListener<E>;
}

Building the object was a bit trickier. In my original implementation, TypeScript inferred listener as a basic callback for the IPC event listener, so it wouldn’t allow me to add the wraps property, and the basic callback didn’t satisfy WrappedIpcEventListener. The solution turned out to be doing it all in one step:

function wrapCallbackWithLogger<E>(
  callback: (event: E, ...args: any[]) => void,
  message: string
): WrappedIpcEventListener<E> {
  return Object.assign(
    (event: E, ...args: any[]) => {
      console.log(message);
      callback(event, ...args);
    },
    {wraps: callback}
  );
}

Object.assign was the final ingredient to making the wrapping work—it took the wrapper callback and a new object containing just the wraps property. The result matched the WrappedIpcEventListener interface perfectly.

Making the filtering work required a little cast (as the listeners method on EventEmitter returns Array<Function>), but I was comfortable with it. If a candidate function didn’t have a wraps property, its wraps value would be undefined, never matching the listener we want to remove:

const listenerToRemove: WrappedIpcEventListener<E> =
  (listeners as Array<WrappedIpcEventListener<E>>)
    .filter(candidate => candidate.wraps === wrappedListener)[0];

With all this in place, the TypeScript compiler is happy, and we’re happy because we keep our extraordinarily useful IPC wrapper.

by Mattie Behrens at January 26, 2019 01:45 PM

Spreading the spread and rest love

JavaScript’s spread syntax has proven to be an extremely useful tool while working with immutable data structures as part of a React/Redux project.

Now that it’s widely available for objects in LTS Node 8 (as it has been for some time for other runtimes via TypeScript), it’s interesting to go back and take a look at all it can do.

Object Spreads

In our codebase, object spreads get the most use by far. They look like this:

const x = { a: 1, b: 2 };
const y = { ...x, c: 3 }; // y == {a: 1, b: 2, c: 3}

Using spread syntax, we expressed that y, a brand new object, should be composed of all of x’s properties and values, with c added to it. Most crucially, x is not modified at all—it is exactly the same object, untouched, as it always was.

Not modifying x satisfies a requirement for shallow immutability—that is, we know that if we keep a reference to x, it still has exactly the same property list that it always had, and none of its properties will point to any new objects. But we now also have y, which is x, but subtly changed.

It’s important to remember what shallow immutability doesn’t give us, though. Notably, if any of x’s properties are mutable objects themselves, those objects can change on either x or its spread descendants, and the change will be visible across all of them. For this reason, it’s important to use object spreads on all the objects you’re modifying, like so:

const x = {
  a: 1,
  b: {
    c: 2,
    d: 3
  }
};

const y = {
  ...x,
  b: {
    ...x.b,
    e: 4
  }
};

// y == { a: 1, b: { c: 2, d: 3, e: 4 } }

Of course, if you’re working on really deep objects, it’s a good idea to break up expressions like this into functions that can address the deeper parts of the object. You could also use a library like lenses to decouple the deep object knowledge from your implementation.

Destructuring Objects and the Rest Pattern

The complement to spreading objects into each other is using the rest pattern in a destructuring assignment to pull selected things out of an object in one assignment.

If you’re not familiar with a destructuring assignment, here’s one that pulls out properties from an object into separate variables:

const x = { a: 1, b: 2, c: 3 };
const {a, b, c} = x;            // a == 1, b == 2, c == 3

When we bring the rest pattern into play, we can pull a out and create a new object to hold the rest of x:

const x = { a: 1, b: 2, c: 3 };
const {a, ...y} = x;            // a == 1, y == { b: 2, c: 3 }

y is useful here because it is an immutably-derived version of x that is missing the a property. We don’t have to do anything with a; if we let it go out of scope and return y, we’ll be returning a new object that would represent what x would be with a deleted, except without mutating x.

You don’t need to use the name of the property for the variable you pull out, either. Just give the property a right-hand side, and a variable with whatever name you give it will spring into existence:

const x = { a: 1, b: 2, c: 3 };
const {a: y, ...z} = x;         // y == 1, z == { b: 2, c: 3 }; no variable a is created

Array Spreads

Array spreads work very similarly to object spreads, but the place where you put the spread becomes more important.

const x = [1, 2, 3];
const y = [ ...x, 4, 5, 6 ]; // y == [ 1, 2, 3, 4, 5, 6 ];
const z = [ 0, ...x, 4, 5 ]; // z == [ 0, 1, 2, 3, 4, 5 ];

The position of the spread determines where the spread array’s contents will appear in the new array. You can spread the contents of an array as many times as you need to, and anywhere:

const x = [1, 2];
const y = [ 4, 5 ];
const z = [ 0, ...x, 3, ...y, 6 ]; // z = [ 0, 1, 2, 3, 4, 5, 6 ]

Just like object spreads, array spreads are shallow. The original array still points to the same things, and now the new array points to those same things. Any mutation of those things will be visible in both arrays.

Destructuring Arrays and the Rest Pattern

Arrays can be destructured just like objects:

const x = [ 1, 2 ];
const [ y, z ] = x; // y == 1, z == 2

We can use the rest pattern to pull out the rest of an array:

const x = [ 1, 2, 3, 4, 5 ];
const [ y, ...z ] = x;       // y == 1, z == [ 2, 3, 4, 5 ]

We can’t, however, use the rest pattern quite as flexibly with arrays as we can with objects. A rest must be the last part of a destructuring array assignment—so we can’t pull everything until the last element in an array, for example. If our needs are too complicated to use destructuring and the rest pattern, we’ll have to resort to the Array API.

Function Call Spreads

Function call spreads are a great way to pass an array of arguments to a function that expects each argument to be passed in separately:

function x(a, b, c) {
  return a + b + c;
}

const y = [ 1, 2, 3 ];

x(...y); // returns 6

Much like array spreads, you can also use function call spreads positionally:

function x(a, b, c) {
  return a + b + c;
}

const y = [ 2, 3 ];

x(1, ...y); // returns 6

This particular pattern gets the most use when you’re writing adapters that can work on many different kinds of functions. It allows you to save off a list of arguments and actually call the function later, without using apply.

Rest Parameters

The inverse of function call spreads is rest parameters, which let you collect a parameter list of arbitrary length without having to work with arguments. For example:

function x(...y) {
  // for x(1, 2, 3), y is an array [ 1, 2, 3 ]
  // we'll use reduce to sum it
  return y.reduce((accumulator, value) => accumulator + value);
}

x(1, 2, 3);       // returns 6
x(1, 2, 3, 4, 5); // returns 15

Since you can use this as the inverse of spreading into a function call, you can use it in an adapter that can capture whatever arguments come in for later application.

But it’s less useful outside that sphere, in my opinion. While it might be tempting to make a function that can simply process an endless list of arguments (as above), it’s clearer to just pass an array in, with the understanding that the entire array will be processed.

One more thing: You can split your function parameters between defined and rest parameters, subject to the same restriction for arrays—the rest parameter must be the last one:

function x(y, ...z) {
  return [y, z];
}

x(1, 2, 3); // returns [ 1, [ 2, 3 ] ]

Argument Destructuring

Bringing it all together, there’s one more useful thing you can do with functions: use destructuring to pull arguments out of objects on the way in.

function x({y, ...z}) {
  return [y, z];
}

x({ y: 1, z: 2, zz: 3 }); // returns [1, { z: 2, zz: 3 }]

Everything you’ve seen above for destructuring assignments works here, including array destructuring and the rest pattern. This can be pretty handy when you need to pull apart a tiny object. But beware, if you’re dealing with a large one, you may want to shift that destructure either into the interior of the function or forgo it entirely to avoid making your function header too dense.

Hopefully, you’ve found some useful new syntax to make your JavaScript code more readable and object manipulation more convenient.

This article originally appeared on Atomic Spin.

by Mattie Behrens at January 26, 2019 01:13 PM

January 03, 2019

zigg.com (Matt Beherens' blog)

In memoriam

Content warning: death, mourning.

I've felt significant loss in the last part of 2018. We lost my spouse's father, a wonderful, kind man who loved his grandchildren. We lost my nineteen-year-old cat, the most special pet I've ever had, who loved everyone he saw and always wanted to be involved in what we were doing.

I've been thinking about what it means for someone to pass on. Religious schools of thought often teach us that the souls of the departed move on somewhere else, but as I've developed my own spirituality I've come to think differently—not least of all because this thought makes no room for the dear friend who came back, not from the dead, but from a long and saddening absence.

I know people take comfort in the religious idea that those who we've lost are in some kind of beyond-the-grave contact with those they've left behind. I believe there's merit to this—that it's our memories of them that continue to touch us.

Those who were close to us leave a deep imprint on us, and when we see them in our dreams, speaking to us about modern concerns they did not experience while they were still with us, I believe it's the collection of experiences we had with them and the patterns they impressed on us roaming our subconscious minds and building these new thoughts.

Even in our waking hours, we find the emptiness of life without these people difficult to bear. They've become a part of us, just as we were a part of them. We feel that absence whether they're just gone for a time or gone forever, and we fill that hole in our hearts with old memories, building on them and making them into something new.

In this way, I believe we can derive some comfort from what we had with those we once had with us, helping us process and mourn. We don't need to specifically embrace any given belief system to touch this—we don't need to think “well, they're gone, and that's it,” because we were all touched, down to our core, by our loved ones.

And they'll always be with us. We were changed by their presence in our lives. We were deeply enriched for having them close, and they will always live with us, until the day we pass on, leaving others with memories of not just us, but everyone that came before us as well.

And I, for one, take great comfort in that thought.

by Mattie Behrens at January 03, 2019 09:05 PM

November 03, 2018

zigg.com (Matt Beherens' blog)

Natalie Nguyen

A year ago today, a young woman named Natalie Nguyen committed suicide, and her death reverberated through the community on Mastodon that I had only been a part of for a few months. I learned about it the next day.

She was not a part of my immediate circles, though we shared many friends. I could feel the pain of her loss through them. She was a light in their lives and extinguished far too soon.

But as if it wasn't cruel enough that the world took her from those friends, what happened afterward hurt them all more. The news reports originally called her a young man. And after a brave crew of those who knew her sought out her parents and shared the Natalie they knew, those same parents buried her in a suit under a name that wasn't hers.

I'd say those friends were shocked, but it was a story they were all too familiar with. Natalie was a transgender woman, a beautiful soul, subjected to the tortures of a world that refused to accept her for who she was. So many of her friends shared that experience—the happiness of living as they were, but the pain of constant denial from those around them.

Some of our community memorialized her in the network messages that move even today through the Mastodon network, piggybacking on communications between the servers. Every time one of those servers answers a request, it says “X-Clacks-Overhead: GNU Natalie Nguyen”, keeping her memory alive.

Today, my friends are crying, remembering. I'm crying for them—I don't want them to hurt. I write this now, mostly because it's heavy on my heart and I must, but also in the hopes that some hearts, somewhere, unfamiliar with the pain our queer family shares, understand and perhaps take some small action to make things better for all of us.

We all watch out for each other, however we can, in this big family I'm a part of. Many of us know what that pain is like. We hope that together, we can hold each other, be there for each other, help each other. Because we all deserve to live.

Natalie will live on in so many hearts. She touched mine, even though I never knew her. I hope that, through me, she touches yours as well.

“if my existence makes random people on the internet happier, then i did good in this world.” —Natalie Nguyen, September 16, 2017

by Mattie Behrens at November 03, 2018 04:28 PM

September 20, 2018

zigg.com (Matt Beherens' blog)

Feeling Pride at Atomic

I am a bisexual man, and last November, I came out to everyone at Atomic.

In any other job I’ve worked, I likely would have endlessly vacillated and probably just mentioned it in passing to a few coworkers. “Who needs to know?” I would have asked myself. And I would have kept quiet.

But from my friends here, I felt support. Respect. I knew that in this environment, I could bring my whole self and freely advocate for all my siblings in the LGBTQIA+ community. What I didn’t expect was how much making that move would pay off for me personally.

The day I came out to Atomic feels like so long ago now. I was surprised to go back in Slack history and find out that it was actually just a little over half a year ago. I mentioned my own orientation at the same time I was sharing Invisible Majority, a report on the disparities bisexual people face in their lives and at work, on our internal discussion channel for inclusion-related topics. That very day, another Atom raised her hand and joined me.

Maybe it feels like so long ago in part because it’s been a long journey for me to get here. Well over two decades ago, I knew something was different about me, but the culture I grew up in told me that my “something different” was wrong. It took me many years of working through a good amount of internal negativity, followed by a long stretch of hiding my true self from everyone but my spouse and a few very close friends, to get to the point where I could finally be out as who I truly am.

Along the way, I’ve seen the struggles of many people who are kept at arm’s length for who they are or how they love, but love proudly nonetheless. I’ve heard so many stories of wedges driven between family members over one’s identity, and stories of acceptance within brand-new families made up of LGBTQIA+ friends. I’ve been saddened by people having to hide who they are because it’s the only way they can function in society, but heartened to know they still believe in themselves. I’ve learned a lot about the history of pain, struggle, and victory in the LGBTQIA+ community—my community—and I want to work toward a world where we are understood and celebrated, instead of feared.

Today, we have a small, but more-than-representative group of LGBTQIA+ Atoms across both offices. We’ve celebrated with each other how good it feels to bring our whole selves to work. We have and continue to critically look inward and seek to effect change to make Atomic more inclusive. We scrambled to find something ostentatiously rainbow-colored for me to wear on my birthday earlier this year. But primarily, we are together to be a community where we understand each other.

At Atomic, we offer benefits to all Atoms’ legally-married partners. We made our restrooms clearly gender-neutral. We specifically invite all Atoms’ significant others to our social events. We joined the Michigan Competitive Workplace Coalition with the goal of updating Michigan’s civil rights law to include sexual orientation and gender identity. (I was recently very happy to hear about progress toward that goal!)

But what has ultimately touched me most has been the love and support I’ve received from several Atoms since I took that step. These Atoms have made me feel more welcome as my real self than I know I would have felt working anywhere I have before.

Being out at Atomic has been a great experience. And I want everyone, everywhere, not just at Atomic but all over Michigan, the United States, and the world, to have experiences like this—to be free to live, be, and, most importantly, celebrate who you are.

That’s why I was personally inspired to write this post. Nobody asked me to, though several Atoms I spoke with about the idea encouraged me. I wanted to share my experience with my siblings in the LGBTQIA+ community, as well as my hope that you have an experience like mine, wherever you are.

Happy Pride. Be true to yourself. And give your love and support to everyone, no matter who they are, or how they love.

This post originally appeared at Atomic Spin.

by Mattie Behrens at September 20, 2018 08:32 PM

Setting up Windows to build and run Node.js applications

Node.js is just JavaScript, right? So it should be really easy to run Node.js applications on Windows—just download and install Node, npm install, and go, right?

Well, for some applications, that’s true. But if you need to compile extensions, you’ll need a few more things. And, of course, with Node.js itself being constantly under development, you’ll want to lock down your development to a version your code can use. In this post, I’ll talk you through how we get our Windows command-line environments set up for the Node.js (actually, Electron) application my team is developing.

First Things First

No one wants to waste time hunting down downloads for a development environment. Instead, install Scoop first, and you’ll get a nice, clean way to add the packages you’ll need without a single web search.

Once you’ve got Scoop installed, it’s time to add some packages. For just Node.js, you’ll want the nodejs package, plus nvm for version management with NVM:

scoop install nodejs nvm

If your project uses Yarn, as ours does, you can grab that from Scoop, as well:

scoop install yarn

If you’re planning on checking out or committing code to GitHub, you’ll also want tools for that:

scoop install openssh git

To finish setting up Git with OpenSSH, note the post-install message that tells you to set up the GIT_SSH environment variable.

Finally, in case you want to quickly do things as an administrative user (which you may, later in this post!), I recommend you install Sudo, which knows how to elevate privileges inside a PowerShell session without spawning a brand new one:

scoop install sudo

Managing Node.js versions

The next thing you’ll want to do is make sure you’re on the right version of Node.js for your project. We’re using the latest LTS version for ours, which as of the time of this writing is 8.11.2. So we issue two NVM commands to install and use it:

nvm install 8.11.2
nvm use 8.11.2

If you’re familiar with NVM on Unix-like systems, you’ll find it works a little differently on Windows with Scoop. When you use a new Node.js version, it will update the binaries under scoop\apps\nvm instead of in $HOME/.nvm.

If you use a version and it doesn’t seem to be taking effect, check your PATH environment variable in the System Properties control panel (search for “environment”); it’s probably been re-ordered. Move the path containing scoop\apps\nvm to the top, and the NVM-selected version will now take precedence.

Compiling Extensions

We don’t have any of our own extensions that need building in our project, but some of our dependencies (namely, node-sass) do.

Extensions like these are built with node-gyp, and node-gyp needs two things: Python (2, wince) and a C compiler, neither of which are standard equipment on a Windows system. If you don’t have them and you need them to build extensions, you will see a long string of gyp ERR! messages when you install dependencies.

Thankfully, there’s a reasonably easy way to install them already configured for node-gyp: windows-build-tools.

After you’ve installed the Scoop nodejs package above, and assuming you installed Sudo, you can now run:

sudo npm install --global --production windows-build-tools

Note that we have observed these installers rebooting a system at least once, which effectively aborted the process. We fixed this in this one case by re-running the installer like so:

sudo npm uninstall --global windows-build-tools
sudo npm install --global --production windows-build-tools

The Moment of Truth

If all the installations worked, you should be ready to go. For our application, a

yarn install
yarn start

was all we needed—of course, you’ll want to start your application however you do normally.

In our case, our application started up and we were off and running.

This post originally appeared on Atomic Spin.

by Mattie Behrens at September 20, 2018 08:27 PM

A JavaScript object that dynamically returns unknown properties

In our current project, we make extensive use of JavaScript objects as dictionaries, with the property name functioning as a key for the object we want to look up. We can use the in operator to test for property presence, and the dictionaries are perfectly JSON-serializable.

However, when it comes time to build test fixtures around these dictionaries for testing code that might look up lots of different keys, creating the test data for all of these keys becomes a large effort. Luckily, ES2015 has a solution.

The Old Way

Before I found this solution, I had code that looked like this:

function generateValue(key) {
  return {data: key + '-data'}
}

export const FIXTURE = {
  a: generateValue('a'),
  b: generateValue('b'),
  c: generateValue('c'),
  d: {data: 'some-real-meaningful-data'}
};

This worked, but as I mentioned, we were looking at having to build out lots of these generated values.

The New Way

Thankfully, wrapping a Proxy around a JavaScript object allows us to override key behavior, including property lookups and retrieval. It turns out to be really handy for this use case.

We can keep our generateValue function, so that we generate unique values for every key in the dictionary. We can also keep any non-generated values. Our new fixture code looks like this:

export const FIXTURE = {
  d: {data: 'some-real-meaningful-data'}
};

export const MAGIC_FIXTURE = new Proxy(FIXTURE, {
  get: (target, prop) => prop in target ? target[prop] : generateValue(prop),
  has: (target, prop) => true
});

We’ve defined a new fixture, a MAGIC_FIXTURE that has special lookup behavior:

  1. For any property access, it will first check to see if the wrapped object has the requested property, and if so, return it. (This allows consumers to still access the fixed d property.) If it doesn’t exist, it generates and returns a new one on the fly.
  2. It claims to have any key requested. This allows consumers to do a check such as 'a' in MAGIC_FIXTURE—a common pattern we use in assertions in our production code to catch invalid accesses.

While working with the Proxy object for this problem, I realized I could create a new kind of dictionary as well—one that would automatically assert that a requested key was present, throwing an AssertionError if it wasn’t there:

const assert = require('assert');

function safeDictionary(dict) {
  return new Proxy(dict, {
    get: (target, prop) => {
      assert(prop in target, prop + ' key not found');
      return target[prop]
    }
  });
}

Proxy objects support lots of other behavior overrides as well, and they can be used on many things—not just basic objects like this.

Of course, you should be very careful using them. You can very easily cause unexpected behavior if you’re not careful to meet the expectations of consuming code—but they can provide very powerful capabilities when passed into code you don’t control.

Happy Proxying!

This article originally appeared on Atomic Spin.

by Mattie Behrens at September 20, 2018 08:23 PM

Review: end-to-end encrypted notes with Standard Notes

I’ve been looking for a software solution I can trust for writing, journaling, and taking notes securely. Many options exist, but they never quite fulfilled the demands of my wishlist: multi-device, cloud-synced, end-to-end-encrypted, and open.

A few months ago, though, I discovered Standard Notes, and now I can’t imagine accepting any other solution.

Standard Notes feels like the kind of solution I’d engineer if I were calling all the shots. The service is entirely open-source, to the point that you can self-host it. It’s simple by default, giving you exactly and only what you need. It stores only end-to-end encrypted blobs of data, meaning the server never has access to your data. The software takes pains to protect your data against loss. And despite all this nerd-tier stuff, it’s very easy to get started.

As of this writing, you can sign up for the free tier on their website and start using Standard Notes immediately, with unlimited cloud-synced note storage and access to all the clients—web, mobile, and desktop. It’s almost too simple to mention.

One of the most useful features you get, even with the free tier, is Device Storage Encryption. In short, this means that even if you’re using full-disk encryption, there’s an extra layer of security to make sure that your keys are never stored unencrypted on the system, and your notes are securely encrypted whenever the app is closed. All you need to do is enable Passcode Lock in your account settings on the desktop to get this support; on iOS, just turn on Storage Encryption, and maybe Fingerprint Lock while you’re in there.

The free tier doesn’t give you access to any extensions, but it does give you the aforementioned unlimited note storage and the standard plain-text editor. I installed apps on my iPhone and my MacBook to start, turning on DSE to give my notes extra protection.

I really like having a place where I can just write anything. Scratch space for writing something that I’m going to publish or send to someone. A quick outline of a brain dump someone is sharing. Private thoughts, journaling happenings in my life. I can do all of this on my desktop or on my phone, depending on where I am, at any time.

I never have to worry about what I write living on someone else’s server, protected by their encryption keys—everything is always under the keys only I have. Writing with this freedom is something you can’t get with other cloud-based solutions that access and/or store your unencrypted content. With this solid, secure architecture in place, I even felt comfortable recommending Standard Notes to my therapist for other patients who might find it useful for journaling.

I ran with this setup for probably a week before I decided that although I was perfectly happy with it, I wanted to both support the project and get easy access to those extensions.

Standard Notes extensions are for the desktop and web apps specifically. They run the gamut from Markdown, HTML, and Vim-emulating code editors to to-do lists and themes, as well as automatic sync, backup features, and even a feature that lets you publish selected notes to a blog.

I’m personally only using the Advanced Markdown Editor, which formats your documents live as you use Markdown conventions and offers a live preview option besides. Whatever extensions you’ve used are automatically available wherever you use the web or desktop apps, so when I added Standard Notes to the inexpensive Windows 10 laptop I picked up last year, everything worked exactly the same way it did on my MacBook.

Supporting Standard Notes feels different from subscribing to many other software services. I can actually do just about everything myself—it’s all on GitHub (including the extensions!) and I could certainly self-host it all. But I feel compelled to support this project because it’s been desperately needed in the world, filling a niche that hasn’t been adequately explored, and doing so in an amazingly open way. Its existence is a dream come true for me, and I want to make sure it’s sustainable.

If you’re looking for a place to do your writing, note-taking, or journaling, I strongly suggest you take a look at Standard Notes. I was amazed that it existed when I found it, and I’m a dedicated user and proud supporter now.

This post originally appeared on Atomic Spin.

by Mattie Behrens at September 20, 2018 07:47 PM

September 08, 2018

Whitemice Consulting

Reading BYTE Fields From An Informix Unload

Exporting records from an Informix table is simple using the UNLOAD TO command. This creates a delimited text file with a row for each record and the fields of the record delimited by the specified delimiter. Useful for data archiving, these files can easily be restored or processed with a Python script.

One complexity exists: if the record contains a BYTE (BLOB) field the contents are dumped hex encoded. This is not base64. To read these files, take the hex encoded string value and decode it with the "hex" codec: content.decode("hex")

The following script reads an Informix unload file delimited with pipes ("|"), decoding the second field, which was of the BYTE type.

rfile = open(ARCHIVE_FILE, 'r')
counter = 0
row = rfile.readline()
while row:
    counter += 1
    print(
        'row#{0} @ offset {1}, len={2}'
        .format(counter, rfile.tell(), len(row), )
    )
    blob_id, content, mimetype, filename, tmp_, tmp_ = row.split('|')
    content = content.decode("hex")  # hex decode the BYTE field (Python 2)
    print('  BLOBid#{0} "{1}" ({2}), len={3}'.format(
        blob_id, filename, mimetype, len(content)
    ))
    if mimetype == 'application/pdf':
        if '/' in filename:
            filename = filename.replace('/', '_')
        wfile = open('wds/{0}.{1}.pdf'.format(blob_id, filename, ), 'wb')
        wfile.write(content)
        wfile.close()
    row = rfile.readline()  # advance to the next record
rfile.close()
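
If you are working in Python 3, where str.decode("hex") no longer exists, bytes.fromhex performs the same conversion. Here is a minimal sketch of the decode step, assuming the same six pipe-delimited fields; the file name archive.unl is a placeholder:

# Python 3 equivalent of the hex decode above; str.decode("hex") is gone,
# but bytes.fromhex performs the same conversion.
# 'archive.unl' is a placeholder file name.
with open('archive.unl', 'r') as rfile:
    for counter, row in enumerate(rfile, start=1):
        blob_id, content, mimetype, filename, tmp1_, tmp2_ = row.split('|')
        data = bytes.fromhex(content)
        print('row#{0} BLOBid#{1} "{2}" ({3}), len={4}'.format(
            counter, blob_id, filename, mimetype, len(data)
        ))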

by whitemice at September 08, 2018 08:05 PM

May 29, 2018

Whitemice Consulting

Disabling Transparent Huge Pages in CentOS7

The THP (Transparent Huge Pages) feature of modern Linux kernels is a boon for on-metal servers with a sufficiently advanced MMU. However, it can also result in performance degradation and inefficient memory use when enabled in a virtual machine [depending on the hypervisor and hosting provider]. See, for example, "Use of large pages can cause memory to be fully allocated". If you are seeing issues in a virtualized environment that point towards unexplained memory consumption, it may be worthwhile to experiment with disabling THP in your guests. These are instructions for controlling the THP feature through the use of a systemd unit.

Create the file /etc/systemd/system/disable-thp.service:

[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target

Enable the new unit:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

THP will now be disabled. However already allocated huge pages are still active. Rebooting the server is advised to bring up the services with THP disabled.
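
To confirm the change took effect, read the settings back from sysfs; the kernel shows the active value in brackets, so "[never]" is what you want to see. A cat of the two files works just as well; below is a minimal Python sketch assuming the standard sysfs paths:

# Read back the THP settings; "[never]" indicates the disable-thp unit worked.
for name in ('enabled', 'defrag'):
    with open('/sys/kernel/mm/transparent_hugepage/' + name) as f:
        print('{0}: {1}'.format(name, f.read().strip()))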

by whitemice at May 29, 2018 07:30 PM

May 06, 2018

Whitemice Consulting

Informix Dialect With CASE Derived Polymorphism

I ran into an interesting issue when using SQLAlchemy 0.7.7 with the Informix dialect. In a rather ugly database (which dates back to the late 1980s) there is a table called "xrefr" that contains two types of records: "supersede" and "cross". What those signify doesn't really matter for this issue so I'll skip any further explanation. But the really twisted part is that while a single field distinguishes between these two record types - it does not do so based on a consistent value. If the value of this field is "S" then the record is a "supersede"; any other value (including NULL) means it is a "cross". This makes creating a polymorphic presentation of this schema a bit more complicated. But have no fear, SQLAlchemy is here!

When faced with a similar issue in the past, on top of PostgreSQL, I've created polymorphic presentations using CASE clauses. But when I tried to do this using the Informix dialect the generated queries failed. They raised the dreaded -201 "Syntax error or access violation" message.

The Informix SQLCODE -201 is in the running for "Most useless error message ever!". Currently it is tied with PHP's "Stack Frame 0" message. Microsoft's "File not found" [no filename specified] is no longer in the running as she is being held at the Hague to face war crimes charges.

Rant: Why do developers get away with such lazy error messages?

The original [failing] code that I tried looked something like this:

    class XrefrRecord(Base):
        __tablename__  = 'xrefr'
        record_id      = Column("xr_serial_no", Integer, primary_key=True)
        ....
        _supersede     = Column("xr_supersede", String(1))
        is_supersede   = column_property( case( [ ( _supersede == 'S', 1, ), ],
                                                else_ = 0 ) )

        __mapper_args__ = { 'polymorphic_on': is_supersede }   


    class Cross(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 0} 


    class Supsersede(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 1}

The generated query looked like:

      SELECT xrefr.xr_serial_no AS xrefr_xr_serial_no,
             .....
             CASE
               WHEN (xrefr.xr_supersede = :1) THEN :2 ELSE :3
               END AS anon_1
      FROM xrefr
      WHERE xrefr.xr_oem_code = :4 AND
            xrefr.xr_vend_code = :5 AND
            CASE
              WHEN (xrefr.xr_supersede = :6) THEN :7
              ELSE :8
             END IN (:9) <--- ('S', 1, 0, '35X', 'A78', 'S', 1, 0, 0)

At a glance it would seem that this should work. If you substitute the values for their place holders in an application like DbVisualizer - it works.

The condition raising the -201 error is the use of place holders in a CASE WHEN structure within the projection clause of the query statement; the DBAPI module / Informix Engine does not [or can not] infer the type [cast] of the values. The SQL cannot be executed unless the values are bound to a type. Why this results in a -201 and not a more specific data-type related error... that is beyond my pay-grade.

An existential dilemma: Notice that when used like this in the projection clause the values to be bound are both input and output values.

The trick to get this to work is to explicitly declare the types of the values when constructing the case statement for the polymorphic mapper. This can be accomplished using the literal_column expression.

    from sqlalchemy import literal_column

    class XrefrRecord(Base):
        _supersede    = Column("xr_supersede", String(1))
        is_supersede  = column_property( case( [ ( _supersede == 'S', literal_column('1', Integer) ) ],
                                                   else_ = literal_column('0', Integer) ) )

        __mapper_args__     = { 'polymorphic_on': is_supersede }

Visually, if you log or echo the statements, they will not appear to be any different than before; but SQLAlchemy is now binding the values to a type when handing the query off to the DBAPI informixdb module.
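
With the mapping in place, querying the subclasses works as you would expect. A minimal usage sketch, where session is a placeholder for an already-configured SQLAlchemy Session:

    # 'session' is a placeholder for an already-configured SQLAlchemy Session.
    # Querying a subclass automatically applies the CASE-based discriminator.
    supersedes = session.query(Supsersede).all()
    crosses = session.query(Cross).filter(Cross.record_id < 1000).all()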

Happy polymorphing!

by whitemice at May 06, 2018 08:23 PM

Sequestering E-Mail

When testing applications one of the concerns is always that their actions don't affect the real world. One aspect of this is sending e-mail; the last thing you want is for the application you are testing to send a paid-in-full customer a flurry of e-mails saying he owes you a zillion dollars. A simple, and reliable, method to avoid this is to adjust the Postfix server on the host used for testing to bury all mail in a shared folder. This way:

  • You don't need to make any changes to the application between production and testing.
  • You can see the message content exactly as it would ordinarily have been delivered.

To accomplish this you can use Postfix's generic address rewriting feature; generic address rewriting processes addresses of messages sent [vs. received as is the more typical case for address rewriting] by the service. For this example we'll rewrite every address to shared+myfolder@example.com using a regular expression.

Step#1

Create the regular expression map. Maps are how Postfix handles all rewriting; a match for the input address is looked for in the left hand [key] column and rewritten in the form specified by the right hand [value] column.

echo "/(.)/           shared+myfolder@example.com" &gt; /etc/postfix/generic.regexp

Step#2

Configure Postfix to use the new map for generic address rewriting.

postconf -e smtp_generic_maps=regexp:/etc/postfix/generic.regexp

Step#3

Tell Postfix to reload its configuration.

postfix reload

Now any mail, to any address, sent via the host's Postfix service will be delivered not to the original address but to the shared "myfolder" folder.
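
To verify the sequestering end-to-end you can push a test message through the local Postfix instance and then look for it in the shared folder. A minimal sketch; both addresses are placeholders, and localhost is assumed to be the host running the reconfigured Postfix:

# Send a throwaway test message through the local Postfix instance.
# Both addresses are placeholders; with the generic map in place the
# message should land in the shared "myfolder" folder regardless.
import smtplib
from email.mime.text import MIMEText

message = MIMEText('Test of generic address rewriting.')
message['Subject'] = 'Sequestering test'
message['From'] = 'application@test.example.com'
message['To'] = 'customer@example.org'

smtp = smtplib.SMTP('localhost')
smtp.sendmail(message['From'], [message['To']], message.as_string())
smtp.quit()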

by whitemice at May 06, 2018 08:11 PM

April 22, 2018

Whitemice Consulting

LDAP extensibleMatch

One of the beauties of LDAP is how simply it lets the user or application perform searching. The various attribute types hint at how to intelligently perform searches: case sensitivity for strings, whether dashes should be treated as relevant characters in phone numbers, etc. However, there are circumstances when you need to override this intelligence and make your search more or less strict; for example, the case sensitivity of a string match. That is the purpose of the extensibleMatch.

Look at this bit of schema:

attributetype ( 2.5.4.41 NAME 'name'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.5.4.4 NAME ( 'sn' 'surname' )
DESC 'RFC2256: last (family) name(s) for which the entity is known by'
SUP name )

The caseIgnoreMatch means that searches on attribute "name", or its descendant "sn" (used in the objectclass inetOrgPerson), are performed in a case insensitive manner. So...

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam sn=williams dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
SASL SSF: 128
SASL installing layers
# Adam Williams, People, Entities, SAM, whitemice.org
dn: cn=Adam Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org
# Michelle Williams, People, Entities, SAM, whitemice.org
dn: cn=Michelle Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org

... this search returns two objects where the sn value is "Williams" even though the search string was "williams".

If for some reason we want to match only the exact lowercase string "williams", and not "Williams", we can use the extensibleMatch syntax.

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam "(sn:caseExactMatch:=williams)" dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
search: 3
result: 0 Success
estate1:~ #

No objects are found, as both entries store "Williams" with an initial capital letter.

Using extensibleMatch I was able to match the value of "sn" with my own preference regarding case sensitivity. The syntax for an extensibleMatch is "({attributename}:{matchingrule}:={value})". This can be used inside a normal LDAP filter along with 'normal' matching expressions.
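
The same filter works from application code as well. A minimal python-ldap sketch of the case-exact search above; the URL, bind credentials, and search base are placeholders:

# Case-exact search on "sn" using an extensibleMatch filter via python-ldap.
# The URL, credentials, and search base are placeholders.
import ldap

dsa = ldap.initialize('ldap://ldap.example.com')
dsa.simple_bind_s('cn=reader,dc=example,dc=com', 'secret')
results = dsa.search_s(
    'ou=People,dc=example,dc=com',
    ldap.SCOPE_SUBTREE,
    '(sn:caseExactMatch:=Williams)',
    ['cn'],
)
for dn, attributes in results:
    print(dn)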

For more information on extensibleMatch see RFC2252 and your DSA's documentation [FYI: Active Directory is a DSA (Directory Service Agent), as is OpenLDAP or any other LDAP server].

by whitemice at April 22, 2018 03:14 PM

Android, SD cards, and exfat

I needed to prepare some SD cards for deployment to Android phones. After formatting the first SD card in a phone I moved it to my laptop and was met with the "Error mounting... unknown filesystem type exfat" error. That was somewhat startling as GVFS gracefully handles almost anything you throw at it. Following this I dropped down to the CLI to inspect how the SD card was formatted.

awilliam@beast01:~> sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 62.5 GiB, 67109912576 bytes, 131074048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *     2048 131074047 131072000 62.5G  7 HPFS/NTFS/exFAT

Seeing the file-system type I guessed that I was missing support for the hack that is exFAT [exFAT is FAT tweaked for use on large SD cards]. A zypper search exfat found two uninstalled packages; GVFS is principally an encapsulation of fuse that adds GNOME awesome into the experience - so the existence of a package named "fuse-exfat" looked promising.

I installed the two related packages:

awilliam@beast01:~> sudo zypper in exfat-utils fuse-exfat
(1/2) Installing: exfat-utils-1.2.7-5.2.x86_64 ........................[done]
(2/2) Installing: fuse-exfat-1.2.7-6.2.x86_64 ........................[done]
Additional rpm output:
Added 'exfat' to the file /etc/filesystems
Added 'exfat_fuse' to the file /etc/filesystems

I removed the SD card from my laptop, reinserted it, and it mounted. No restart of anything required. GVFS rules! At this point I could move forward with rsync'ing the gigabytes of documents onto the SD card.

It is also possible to format the card on the openSUSE laptop in the first place. Partition the card, creating a partition of type "7", and then use mkfs.exfat to format the partition. Be careful to give each card a unique ID using the -n option.

awilliam@beast01:~> sudo mkfs.exfat  -n 430E-2980 /dev/mmcblk0p1
mkexfatfs 1.2.7
Creating... done.
Flushing... done.
File system created successfully.

The mkfs.exfat command is provided by the exfat-utils package; a filesystem-utils package exists for most (all?) supported file-systems. These -utils packages provide the various commands to create, check, repair, or tune the eponymous file-system type.

by whitemice at April 22, 2018 02:34 PM

April 03, 2018

Whitemice Consulting

VERR_PDM_DEVHLPR3_VERSION_MISMATCH

After downloading a VirtualBox-ready ISO of OpenVAS, the newly created virtual machine to host the instance failed to start with a VERR_PDM_DEVHLPR3_VERSION_MISMATCH error. The quick-and-dirty solution was to set the instance to use USB 1.1. This setting is changed under Machine -> Settings -> USB -> Select USB 1.1 OHCI Controller. After that change the instance boots and runs the installer.

virtualbox-qt-5.1.34-47.1.x86_64
virtualbox-5.1.34-47.1.x86_64
virtualbox-host-kmp-default-5.1.34_k4.4.120_45-47.1.x86_64
kernel-default-4.4.120-45.1.x86_64
openSUSE 42.3 (x86_64)

by whitemice at April 03, 2018 12:21 PM

March 19, 2018

zigg.com (Matt Beherens' blog)

Why a no-moonlighting guideline benefits employees

I had an old employer reach out to me the other day asking if I’d like to do some contract work for them. As I have in all these situations, I recalled Atomic’s guideline for Atoms—we should not do work on the side that competes or conflicts with Atomic’s business.

While it’s immediately clear how such a guideline protects Atomic’s business, I’ve also found that it’s really helpful for me personally.

Sustainable pace is an important Atomic value—one that attracted me strongly to becoming an Atom in the first place. It’s something I strive to live out personally, and something I watch my fellow Atoms for, so I can help support them if they’re feeling stress and are at risk of spending more energy than they have.

Atoms commit to a roughly forty-hour week, spending the majority of that delivering value to clients, and a small part sharing responsibility for the business and for each other. We go home and pursue other interests every day, which keeps us in balance, not just to give us the energy to do good work for our clients the next day, but also to make us richer human beings.

Moonlighting threatens sustainable pace by asking us to push past that sustainable pace. It erodes our ability to be the best we can be during the day, as well as after we close our computers and leave for the day. It turns us from healthy human beings into constantly-drained machines, never getting the chance to recharge our brains, wiring them to do just one specific thing instead of being all that we can be.

Working for a past employer again specifically can also stunt our growth. Positions we’ve held in the past are part of us; they have made us better consultants by giving us a wide range of experiences. But returning to those positions is often a return to old mental pathways well-explored; it’s better for both us and those employers that new people come on to bring new perspectives and add to their own experience. Atomic can even help them here, if it makes sense for them to work with us, by letting them work with new-to-them faces from our own team.

To be all you can be as a consultant, and as a human being, I believe diversity of experience is critical. Being able to focus on each challenge at Atomic in turn as we move from project to project, and being able to put it all down, live and have a healthy balance in our lives, makes us stronger at our jobs as well as better human beings.

And that’s why I have to politely decline when an old employer asks if I’d like to do work on the side for them, and why I steer them toward working with us, if it’s appropriate. Moonlighting is not just something that’s in competition with Atomic; it’s very much in competition with me being my best self.

This post originally appeared on Atomic Spin.

by Mattie Behrens at March 19, 2018 01:01 PM

March 11, 2018

Whitemice Consulting

AWESOME: from-to Change Log viewer for PostgreSQL

Upgrading a database is always a tedious process - a responsible administrator will have to read through the Changelog for every subsequent version from the version ze is upgrading from to the one ze is upgrading to.

Then I found this! This is a Changelog viewer which allows you to select a from and a to version and shows you all the changelogs in between; on one page. You still have to read it, of course, but this is a great time saver.

by whitemice at March 11, 2018 01:15 AM

January 17, 2018

Whitemice Consulting

Discovering Informix Version Via SQL

It is possible using the dbinfo function to retrieve the engine's version information via an SQL command:

select dbinfo('version','full') from sysmaster:sysdual

which will return a value like:

IBM Informix Dynamic Server Version 12.10.FC6WE
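
The same query can be issued from application code. A minimal DB-API sketch, where get_connection() is a placeholder for however you obtain an Informix connection (informixdb, pyodbc, etc.):

# Fetch the engine version over an existing DB-API connection.
# get_connection() is a placeholder for your connection factory.
cursor = get_connection().cursor()
cursor.execute("select dbinfo('version','full') from sysmaster:sysdual")
print(cursor.fetchone()[0])
cursor.close()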

by whitemice at January 17, 2018 08:56 PM

October 31, 2017

zigg.com (Matt Beherens' blog)

Resetting a Wacom Bamboo Spark

Last week, I turned on my Wacom Bamboo Spark smartpad (no longer available, but Wacom has other smartpad models) and the two indicator lights started flashing alternately like a railroad crossing signal.

I could go through the Inkspace re-pairing process successfully, despite the lights never flashing, but the Spark would no longer recognize or record—or at the very least, would not sync—any additional handwritten notes I would make.

I contacted Wacom on Tuesday. After several days of silence, I finally tweeted angrily at them. Some DMs later and, that night, I had some instructions in my inbox on how to reset my Spark that were not available on their support site.

Here's how you reset a Wacom Bamboo Spark, using an iOS device with the Inkspace app installed.

  1. Tap the Settings menu (gear icon) in the upper-right corner of the app.

  2. Select “Your Device”.

  3. Select “Pair Device”.

  4. Turn the Spark on, and select “Next”.

  5. Hold the Spark's page button until Inkspace shows “Select your device”.

  6. Select your device from the list and select “Next”.

  7. Press the Spark's page button to confirm.

  8. Tap five times on the “Enter a unique name” label.

  9. Confirm the “Device Memory Reset” dialog by selecting “Reset”.

  10. Continue with the pairing process until complete.

I hope this helps someone out—I enjoy my Spark and was quite put out at not being able to digitize notes for a week.

by Mattie Behrens at October 31, 2017 01:51 PM

October 09, 2017

Whitemice Consulting

Failure to apply LDAP pages results control.

On a particular instance of OpenGroupware Coils the switch from an OpenLDAP server to an Active Directory service - which should be nearly seamless - resulted in "Failure to apply LDAP pages results control.". Interesting, as Active Directory certainly supports paged results - the 1.2.840.113556.1.4.319 control.

But there is a caveat! Of course.

Active Directory does not support the combination of the paged control and referrals in some situations. So to reliably get the paged control enabled it is also necessary to disable referrals.

...
dsa = ldap.initialize(config.get('url'))
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
....

Disabling referrals is likely what you want anyway, unless you are going to implement referral following. Additionally, in the case of Active Directory the referrals rarely reference data which an application would be interested in.
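
For reference, here is a minimal sketch of a complete paged search with python-ldap (2.4-style controls); the URL, credentials, search base, and filter are placeholders:

# Paged search against Active Directory with referrals disabled.
# The URL, credentials, search base, and filter are placeholders.
import ldap
from ldap.controls import SimplePagedResultsControl

dsa = ldap.initialize('ldap://dc01.example.com')
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
dsa.simple_bind_s('reader@example.com', 'secret')

page = SimplePagedResultsControl(True, size=500, cookie='')
while True:
    msgid = dsa.search_ext(
        'dc=example,dc=com', ldap.SCOPE_SUBTREE,
        '(objectClass=user)', ['cn'],
        serverctrls=[page],
    )
    rtype, rdata, rmsgid, serverctrls = dsa.result3(msgid)
    for dn, attributes in rdata:
        if dn:  # search references come back with a dn of None; skip them
            print(dn)
    cookies = [c.cookie for c in serverctrls
               if c.controlType == SimplePagedResultsControl.controlType]
    if not cookies or not cookies[0]:
        break  # an empty cookie means the final page has been read
    page.cookie = cookies[0]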

The details of Active Directory and pages results + referrals can be found here

by whitemice at October 09, 2017 03:03 PM

August 31, 2017

Whitemice Consulting

opensuse 42.3

Finally got around to updating my work-a-day laptop to openSUSE 42.3. As usual I did an in-place distribution update via zypper. This involves replacing the previous version repositories with the current version repositories - and then performing a dup. And as usual the process was quick and flawless. After a reboot everything just-works and I go back to doing useful things. This makes for an uninteresting BLOG post, which is as it should be.

zypper lr --url
zypper rr http-download.opensuse.org-f7da6bb3
zypper rr packman
zypper rr repo-non-oss
zypper rr repo-oss
zypper rr repo-update-non-oss
zypper rr repo-update-oss
zypper rr server:mail
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/non-oss/ repo-non-oss
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ repo-oss
zypper ar http://download.opensuse.org/repositories/server:/mail/openSUSE_Leap_42.3/ server:mail
zypper ar http://download.opensuse.org/update/leap/42.3/non-oss/ repo-update-non-oss
zypper ar http://download.opensuse.org/update/leap/42.3/oss/ repo-update-oss
zypper ar http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.3 packman
zypper lr --url  # double check
zypper ref  # refresh
zypper dup --download-in-advance  # distribution update
zypper up  # update, just a double check
reboot

Done.

by whitemice at August 31, 2017 12:49 PM

July 31, 2017

zigg.com (Matt Beherens' blog)

A PyOhio emergency

As you may have seen, I was at PyOhio this weekend, and I was t{wee,oo}ting a lot. As such, my Apple Watch was going nuts with RTs, faves, &c. I was talking to some people in a hallway and force-pressed to clear my notifications… and somehow the “Clear All” button got stuck on my watch screen.

I kept pressing it, and nothing happened. So I tried holding down the button that normally would bring up the power-off screen…


Watch: (loudly) WHOOP WHOOP

Apple Watch has a feature that I've never had occasion to try: if you hold down the function button for even longer than it takes to get the power-off screen, it will go into emergency mode and eventually call the local emergency dispatch service.

I'm now in full panic mode. I pull out my phone and try to determine whether it's calling 911. I can't tell. I'm frantically searching Apple's support site to find out how to hard-power-down my watch, pronto, all while holding the watch to my ear to make sure that if a voice starts coming out of it asking what my emergency is, I'm ready to explain and apologize profusely.

Finally, I find the article. “Hold both buttons down till you see the Apple logo,” it says. I did this, and finally, finally, the watch definitively powers down and I reboot it.

As far as I can tell, emergency dispatchers were never summoned, and nobody at the conference got in trouble, particularly yours truly. And I amused the people I was talking with, a little.

Phew. 😅

by Mattie Behrens at July 31, 2017 09:24 AM

July 19, 2017

zigg.com (Matt Beherens' blog)

Lessons learned losing

This morning I woke up, weighed myself, and found I'd finally lost 50 pounds since I started pursuing weight loss in earnest in September 2016.

This isn't my first time here. Back in 2008, when Wii Fit originally came out, I also shed a good amount of weight, landing about 5 pounds or so over where I am today. But that loss wasn't as good as it could have been:

  • Intellectually, I knew that losing 5 pounds a week was unsustainable, but I had no problem shedding somewhere under that so long as I wasn't collapsing on the floor.
  • I was losing primarily by following a rule I made up for myself: “eat half of what you want to eat.” This left me with the mindset that I was always shortchanging myself.
  • Once I hit a place where I felt pretty good about my weight, I declared myself done. Now I could just “be healthy” without much effort.
  • Perhaps the most important: I was always pushing myself, using every last bit of energy I had on burning and focusing on eating less. I could do this then because I had a job that didn't demand much, but I was always on the verge of falling apart if something else happened in my life.

Given this, it shouldn't be any surprise that years later, all that work was undone—and then some.

Work requires energy

I struggled with the fact that I'd lost the fruits of my hard work for several years. It fed into an undercurrent of depression in my life. For a few years, I had decided that I was hopeless; that I couldn't lose weight.

Paradoxically, my new, very stimulating work at Atomic Object compounded this, by being an additional demand on my limited reserve of energy. The idea of eating less or exercising was something I tried to push through, but I'd fall flat, tired and running back to the arms of too much food and too little movement.

What finally broke this chain was a visit to my doctor. It looked like people around me were able to keep these balls in the air; was there something wrong with me? My doctor is a great listener and has an uncannily good sense of where I might want to explore a complaint, even when I'm verbally tripping over myself trying to explain what's wrong—and while he took a blood test too, he suggested that I was probably suffering from depression, and that I should consult a sleep doctor.

Sleep being a problem was something I'd never considered before. I knew I “snored a little”, but I usually fell asleep fairly readily and was able to drag myself out of bed in the morning—surely I didn't actually have a sleep disorder? Gamely, I set myself up with that appointment (pro tip: it takes a long-ass time to see a sleep doctor, and several more long-ass times to get the study, &c.—don't delay making that appointment if you think it can help you!) and simultaneously started seeing a therapist to address my depression and anxiety.

I have foggy memories of how I was when I started seeing my therapist. More importantly, I have journal entries I made from that time. I came in with a preconceived list of reasons I thought I was suffering, things we explored but ultimately thinned out considerably. It was simply hard for me to deal with just about any adversity in my life; I'd break down. I wasn't enjoying a lot of things I used to enjoy in my life even when there wasn't anything trying to keep me down.

We worked on those things, and made progress, little by little. He taught me techniques for being mindful of my emotions—the one about conscious breathing and setting aside negative thoughts was especially huge—and he encouraged me to keep working on me, pushing through the demands of sleep medicine and doing just a little bit of exercise each day, always looking forward and not dwelling on the past.

The day I finally saw the sleep doctor, fortune had it that I was a little bit of a wreck. My home sleep study was months out—sleep medicine in general seems to be woefully under-resourced, which is shocking considering how many people are living with undiagnosed sleep disorders. I was a little teary, but I pushed through and became my own advocate, and walked out with a promise to slot me in if there were any cancellations and a few sample packs of Silenor to help me get through the period until I could pick up the sleep study equipment.

It took a lot out of me that day to advocate for myself, to press my case, but I'm so glad I did.

Turning point

Silenor (doxepin) is a good drug, but it's also yet another example of what's wrong with American medicine. Those sample packs helped me get what was probably the most consistent sleep I'd had in a long time. But when they ran out, the little pills were amazingly expensive and not something my insurance was interested in paying for unless I'd tried a litany of other sleep medicines first. Sorry, but fuck that—this worked, it wasn't addictive, I hoped that it was short-term.

Thankfully, I had a resourceful sleep doctor who noted that doxepin has actually been on the market a very long time, as the antidepressant Sinequan—and has a readily-available and very cheap generic, if you don't mind it coming in a disgustingly minty syrup form for some reason. I mixed mine with a glass of water every night.

With doxepin helping me sleep more deeply, my life started to change. I had more energy to work on myself with the assistance of my therapist. I had more energy to start to look at what I was eating and continue to engage in a little bit of physical activity several days a week. I was able to deal with life when it decided it hated me. I started weighing myself and tracking my exercise.

Finally, I was able to have the sleep study. The results came back in a few days. I'll always remember the call from my doctor and my internal reactions to it:

Doctor: You stopped breathing 8-10 times

Me: That's not so bad

Doctor: …per hour

Me: OH MY G–

Doctor: …which is mild sleep apnea.

Me: Ha ha. What.

Several months later, thanks to some really awful insurance confusion—the sleep office thought that I could have a “titration study” to figure out what level of air pressure I'd need from my sleep machine as well as what mask fit me well; insurance didn't think I had a bad enough time to justify that and could just go home with a machine set wide open; and all of this just meant delays, delays, delays—I stopped taking the doxepin and started strapping a mask to my face every night.

It took a little while to adjust—wide-open APAP machines have a bad habit of giving you so much pressure that the mask tries to lift off your face; the sleep techs lowered the max pressure to fix that—but about a month and several changes later, I found myself with the energy I needed to get started.

Working on me

Being able to deal with depression and having better sleep equipped me to be able to work on myself like I knew I needed to. I set several goals for myself:

  • I would aim first for losing 50 pounds, at a rate of about 1-2 pounds per week.
  • I would track everything I ate and set a calorie goal for each day. (I'm using MyFitnessPal, which in itself took some effort—I quit using it once in the past because I couldn't stand the poor quality of the nutrition database, and I also wasn't super-thrilled about data in the cloud. The secret? Accept that some things are imperfect. Like many medical interventions that get judged harshly, using it is more helpful than not using it.)
  • I would work up to exercising 30 minutes a day most days of the week.

In the beginning, it was really easy to lose. I shed over 2 pounds a week, and had to push to rein myself in. There are a lot of reasons for this that I've heard, ranging from losing water weight (something that happens to just about everyone when they change their digestive equilibrium and usually looks really dramatic and encouraging) to the fact that I was pretty sedentary—just thinking about getting off the couch was enough to get my heart rate to 150 bpm.

And so I started in on this. Several days a week, I'd ride my stationary bike for ten minutes. I was wearing an Apple Watch at this point (something I'd bought just for notifications, but it turned out the fitness tracking was super helpful), so I knew that the calories I lost were very close to just noise—but moving helped me feel better, and helped me up that energy level just a little bit more. Soon I bumped it up to fifteen. Twenty. Pulled myself back whenever I started to feel run-down, but made sure to do something every day.

I tried hard not to beat myself up if I missed a day, be it for injury or maybe a wave of depression. This happens. It's a little hard to imagine it happening, but I have the journal entries to prove it, and I remember the therapy sessions where I was encouraged not to keep talking about myself so negatively. Just get up and hit it again the next day. Make it a goal every day to get it done.

In the meantime, I worked on religiously tracking everything that went into my body. I'd eat about 1,500-2,000 calories a day (and also vowed never to go below that unless advised by a doctor—I was looking to establish healthy habits, not die of malnutrition) and watch the effect on the weight chart.

I started looking harder at what I ate, too: I looked for foods that helped me maintain energy while having a low calorie load. This has gradually shifted my diet, reducing but not eliminating carbs and consuming a lot more protein. Keeping an eye on that balance. Watching my body's reaction.

I started walking to break up the stationary biking monotony. At first, this was a huge calorie burn, especially because my heart rate went pretty high every time I exerted myself more than a little bit. Over time, it became easier; today, I need to keep a pretty quick pace to even approach 120 bpm. (I don't run—I have some ankle problems, and surgery is not a great option for me. But walking is okay.)

Through it all, I kept up therapy, and I worked hard sticking with that APAP machine to keep my sleep going. There were times where I slid back on one or the other, and let me tell you, if one vertex of that triangle of sleep, emotional well-being, and physical well-being becomes stressed, it pulls on the other two. But I came back out every time and got things back into balance, and I'm super proud of myself for it—and it's now so much easier to get things back on track the next time, because I know what to do, and I know what works for me.

Into the future

It's much harder to lose weight where I am now—I'm only shedding about ⅓ pound a week, and sometimes it feels like a push—for a while I thought I might never get to write this post. But on those days, I draw a line on how much I'll push myself, and remind myself that hey, you got this far, you're staying here and not sliding back, and that's awesome. A month ago, I even started bicycling four miles to the bus stop to work and four miles home again. I never would have dreamed that would have been me a year ago, but here we are. As I write this, I've got my silly little bike helmet next to me. It feels great.

At my last physical, my doctor's PA (who is also really good—good doctors are so important) and I agreed on a target weight that's still another 30 pounds away; that seems like it's going to take a while. I'm planning on going for it, but you know what? I'm so much happier where I am today. If it turned out that my 40-year-old body was at its equilibrium point here at the end of all things, I'd be 100% content with being where I am. I feel good (most of the time), I sleep well (most of the time), and I'm able to deal with my anxiety and depression like a ninja (most of the time). It's a great place to be.

So what should anyone else take away from all of this? I'm not trying to be a diet and exercise guru, but I do know this:

  • I never would have made it this far without that first visit to the doctor setting me on my way.
  • I needed energy to succeed at losing weight and getting fitter, and I wasn't going to get it without solving the problems that were draining that energy away.
  • Physical well-being, getting good sleep, and emotional well-being are three vertices of a triangle that needs to be in equilibrium. If one vertex gets stressed, it pulls on the rest.

I sometimes wish it hadn't taken me 40 years to learn these things. But that's okay. I know it now, and with that knowledge I'm changing the future.

One more lesson

There's one other high-level lesson I'm pleased to have pulled from this: science wins. Sleep science knows that sleep deprivation affects you negatively. Psychological science knows that mental illnesses affect you negatively. Dietary science knows that calorie restriction is the only proven way to lose weight. Exercise science knows that exercise makes your body work better. All of these lessons have been proven out, and I was fortunate to have good professionals helping me along at every step of the way.

I didn't get any of this from exploitative, pseudoscientific charlatans, whose thumbs I suspect many people are struggling under today. You can get a quick “win” feeling from a product that sounds cool but has no proven benefit (and possibly some awful risks), but you won't improve your life from anything that product gives you.

So, please. Stick to science—because I care about you and I want you to get better. Change your doctors if you're not getting what you need. Work at everything that drains you, a little bit at a time. Advocate for your own care. Advocate for the care of others—yes, this includes doing so politically! But don't throw your money or encourage others to throw theirs at unscientific “wellness”; it can't help you.

I may not know you, dear reader, personally. But I care about you and I want you to work on yourself. Your needs are almost certainly very different from mine, but maybe my story can help encourage you to attack what's draining your energy and remove it from your life, giving you what you need to improve yourself. Maybe you even have a sleep disorder or mental illness to deal with specifically, in which case, great! Talk to your doctor about this. Talk to them about whatever it is that's draining you.

But most of all, love yourself by treating yourself well. You deserve it.

by Mattie Behrens at July 19, 2017 08:23 AM

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". However, when using a Microsoft DC, searching for an object by its SID is not as straightforward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see the guid2hex(text_sid) function below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://pdc.example.com'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN, ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ]
    )
    # filter out objects lacking a DN - they are LDAP referrals
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        print('DN: {0}'.format(obj.get_dn(), ))

    pdc.unbind()
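
For illustration, guid2hex simply escapes each character of the SID string as a backslash-prefixed hex pair - the \XX escape form used in LDAP search filters. A quick check in a Python interpreter (not part of the original script; shortened SID for readability):

>>> guid2hex('S-1-5')
'\\53\\2D\\31\\2D\\35'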

by whitemice at June 06, 2017 12:11 AM

May 27, 2017

zigg.com (Matt Beherens' blog)

Retiring brewdo

It's been a long time since I've written a post just for this site, hasn't it?

Nearly three years ago, I joined Atomic Object. (I actually started in August 2014. We tend to publish our welcome posts a little while after new Atoms settle in. Also, holy cow. Look at me then and now. I guess I have lost a lot of weight!)

At the time, I was pretty actively blogging here and exploring my role in the tech community, sharing projects on my GitHub both actively-used and experimental, and even trying out giving talks. In retrospect, I was doing a lot. And it took a lot of time in addition to my now much-more-engaging work as an Atom, which is probably why my GitHub contribution graph seems to have dropped off since that August.

And so I come now to some housekeeping I've been doing today. One key piece of that housekeeping was deciding what to do with brewdo.

You can read up on why I originally made brewdo here on the blog. Since then, Homebrew has added their own sandbox, which addressed the most important thing that brewdo does that I care about. I've been running Homebrew in $HOME/Library/Homebrew with that support in play for some time, and I've been really happy with it.

So I think now is the time to mark brewdo as unmaintained. I get bug reports on it every so often, mostly having to do with migration or installation, problems that take a lot of effort to even work on. And I just don't have the personal energy for it. I want to make room in my life for other things.

I'll be doing that shortly. If someone wants to take over brewdo, I'd be thrilled to pass it on to them! Just get in touch.

by Mattie Behrens at May 27, 2017 02:48 PM

Virtual network customization in VMware Fusion

VMware Fusion is a powerful tool for developers that need to virtualize systems. Its networking functionality is also powerful, but somewhat hidden. In my latest post, I dive into customizing virtual networks over at Atomic Spin.

by Mattie Behrens at May 27, 2017 02:06 PM

Security hygiene for software professionals

A topic near and dear to my heart, and I hope to every software professional's as well, is how to be as secure as possible. I've covered a number of ways you can practice good security hygiene over at Atomic Spin.

by Mattie Behrens at May 27, 2017 01:53 PM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit adam@example.com
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials

Huh.

[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

In some cases the case of the realm name matters.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'ï»¿dn'
Modified 0 records with 0 failures

Eh? @&^$*&@&^@! It does start with a dn: attribute; it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with the "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!
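
If you would rather not remember the sed incantation, the same cleanup is easy from Python; a minimal sketch using the same file names as above:

# strip a leading UTF-8 BOM from an LDIF file, if present
BOM = b'\xef\xbb\xbf'

with open('horde-person.ldf', 'rb') as f:
    data = f.read()

if data.startswith(BOM):
    data = data[len(BOM):]

with open('nobom.ldf', 'wb') as f:
    f.write(data)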

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file. Add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but in the future a yum update may break your system.

Install PostgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot, use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note that this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package from destroying your database if you install one accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O http://epel.mirror.freedomvoice.com/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG packages for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages you will see dependency errors.

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: libjson.so.0()(64bit) for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libcfitsio.so.0()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libspatialite.so.2()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
...

Initializing PostGIS

Many PostGIS applications expect the template database "template_postgis" to exist, but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages, the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are attempting to use other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the schema - to do so run the following command immediately after the "createdb" command. If you see the error stating the language is already enabled you are good to go, otherwise you should see a message stating the language was enabled. If creating the language fails for any reason other than it already being enabled you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"

Celebrate

PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.
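
As a quick sanity check from Python - a minimal sketch, assuming the psycopg2 driver is installed and a database named "gisdb" has been created from the template_postgis template:

import psycopg2

# connect to a database created with: createdb -T template_postgis gisdb
conn = psycopg2.connect(dbname='gisdb', user='postgres')
cur = conn.cursor()

# PostGIS_Version() only exists if the PostGIS functions were loaded
cur.execute('SELECT PostGIS_Version();')
print(cur.fetchone()[0])

# a trivial spatial query: the distance between two points
cur.execute('SELECT ST_Distance(ST_MakePoint(0, 0), ST_MakePoint(3, 4));')
print(cur.fetchone()[0])  # 5.0

cur.close()
conn.close()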

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious if they are shaping their own raw packets onto the wire. But, no, the explanation is the much less exciting, and typical, lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 15,91,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on a neighboring interface on the network and disabled on this interface; CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol that, if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in its interface configuration, those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format into the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencasting is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert  WindowsServer1-compressed.img  -O raw  WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Sure enough the file is the same size as when we started (size: 14,591,983,616). Upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use that for testing. Set the original as immutable [chattr +i] to prevent it being used by accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File                                Size
WindowsServer1-compressed.img.bz2   5,102,043,940
WindowsServer1.SCRATCH.vdi          14,591,983,616
WindowsServer1.vdi                  14,591,983,616

Step#5: Generate a new UUID for the scratch image.

This is necessary anytime a disk image is duplicated; otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.
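
If you do this kind of conversion often, the whole sequence is easy to script; a rough sketch using subprocess, assuming qemu-img and VBoxManage are on the PATH (the script name and arguments are just illustrative):

import os
import subprocess
import sys

# usage: python qemu2vdi.py WindowsServer1-compressed.img WindowsServer1.vdi
src, dst = sys.argv[1], sys.argv[2]
raw = dst.rsplit('.', 1)[0] + '.raw'

subprocess.check_call(['qemu-img', 'convert', src, '-O', 'raw', raw])                 # Step#1
subprocess.check_call(['VBoxManage', 'convertfromraw', raw, '--format', 'vdi', dst])  # Step#2
subprocess.check_call(['VBoxManage', 'modifyhd', dst, '--compact'])                   # Step#3
os.remove(raw)  # discard the intermediate raw image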

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is that "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to a much lower value - like 5% or 10% (for what reason I don't know). This value can be set at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.
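
A quick way to check how much inode headroom a mounted filesystem has before reaching for xfs_growfs is a few lines of Python; just a convenience sketch, with the mount point as an example:

import os

st = os.statvfs('/mnt')

# f_files is the total number of inodes, f_ffree how many are still free
used = st.f_files - st.f_ffree
print('inodes: {0} used of {1} ({2:.1f}% free)'.format(
    used, st.f_files, 100.0 * st.f_ffree / st.f_files))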

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugged in, these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we don't have a drive letter, the "::" is used to refer to the actual device specified using the "-i" directive. The "-s" directive means show; otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [default behavior is to set, not show].
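
If you find yourself relabeling a pile of cards, mlabel is also easy to drive from a script; a small sketch using subprocess, with the device path and label as examples:

import subprocess

def get_fat_label(device):
    # "mlabel -i DEVICE -s ::" prints the current label without changing it
    out = subprocess.check_output(['mlabel', '-i', device, '-s', '::'])
    return out.decode().strip()

def set_fat_label(device, label):
    # no whitespace between "::" and the new label
    subprocess.check_call(['mlabel', '-i', device, '::' + label])

print(get_fat_label('/dev/sdb1'))
set_fat_label('/dev/sdb1', 'DataCruzer')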

by whitemice at January 24, 2017 07:51 PM