Planet GRPUG

September 11, 2019

Whitemice Consulting

PostgreSQL: Casted Indexes

Dates in databases are a tedious thing. Sometimes a time value is recorded as a timestamp, at other times - probably in most cases - it is recorded as a date. Yet it can be useful to perform date-time queries using a representation of time distinct from what is recorded in the table. For example, the database may record timestamps, but I want to look up records by date.

To this end PostgreSQL supports indexing a table by a cast of a field.

Create A Sample

testing=> CREATE TABLE tstest (id int, ts timestamp);
CREATE TABLE
testing=> INSERT INTO tstest VALUES (1,'2018-09-01 12:30:16');
INSERT 0 1
testing=> INSERT INTO tstest VALUES (2,'2019-09-02 10:30:17');
INSERT 0 1

Create The Index

Now we can use the "::" operator to create an index on the ts field, but as a date rather than a timestamp.

testing=> create index tstest_tstodate on tstest ((ts::date));
CREATE INDEX

Testing

Now, will the database use this index? Yes, provided we cast ts as we do in the index.

testing=>SET ENABLE_SEQSCAN=off;
SET
testing=> EXPLAIN SELECT * FROM tstest WHERE ts::date='2019-09-02';
                                 QUERY PLAN                                  
-----------------------------------------------------------------------------
 Index Scan using tstest_tstodate on tstest  (cost=0.13..8.14 rows=1 width=12)
   Index Cond: ((ts)::date = '2019-09-02'::date)
(2 rows)

For demonstration purposes it is necessary to disable sequential scanning (ENABLE_SEQSCAN=off), otherwise with a table this small PostgreSQL would never use the index.

Casting values in an index can be a significant performance win when you frequently query data in a form differing from its recorded form.
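
From application code the same index gets used as long as the cast appears in the query text; a minimal sketch using psycopg2, where the connection parameters are placeholders and tstest is the example table above:

import datetime
import psycopg2

# Connection parameters are placeholders for the example database.
connection = psycopg2.connect(dbname='testing', user='testing', host='localhost')
cursor = connection.cursor()
# Casting ts to date in the WHERE clause lets the planner use tstest_tstodate.
cursor.execute(
    "SELECT id, ts FROM tstest WHERE ts::date = %s",
    (datetime.date(2019, 9, 2), ),
)
for record_id, ts in cursor.fetchall():
    print(record_id, ts)
cursor.close()
connection.close()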


by whitemice at September 11, 2019 03:09 PM

August 30, 2019

Whitemice Consulting

Listing Printer/Device Assignments

The assignment of print queues to device URIs can be listed from a CUPS server using the lpstat command's "-v" option.

The following authenticates to the CUPS server cups.example.com as user adam and lists the queue and device URI relationships.

[user@host ~]# lpstat -U adam -h cups.example.com:631 -v | more
device for brtlm1: lpd://cismfp1.example.com/lp
device for brtlp1: socket://lpd02914.example.com:9100
device for brtlp2: socket://LPD02369.example.com:9100
device for brtmfp1: lpd://brtmfp1.example.com/lp
device for btcmfp1: lpd://btcmfp1.example.com/lp
device for cenlm1: lpd://LPD04717.example.com/lp
device for cenlp: socket://LPD02697.example.com:9100
device for cenmfp1: ipp://cenmfp1.example.com/ipp/
device for ogo_cs_sales_invoices: cups-to-ogo://attachfs/399999909/${guid}.pdf?mode=file&pa.cupsJobId=${id}&pa.cupsJobUser=${user}&pa.cupsJobTitle=${title}
device for pdf: ipp-to-pdf://smtp
...
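
The same queue-to-device mapping can be collected programmatically by parsing that output; a rough Python sketch, using the same placeholder server and user as above:

import subprocess

# Run the same lpstat invocation as above; host and user are placeholders.
output = subprocess.check_output(
    ['lpstat', '-U', 'adam', '-h', 'cups.example.com:631', '-v']
).decode('utf-8')

queues = {}
for line in output.splitlines():
    # Lines look like: "device for brtlm1: lpd://cismfp1.example.com/lp"
    if line.startswith('device for '):
        name, _, uri = line[len('device for '):].partition(': ')
        queues[name] = uri

for name, uri in sorted(queues.items()):
    print('{0} -> {1}'.format(name, uri))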

by whitemice at August 30, 2019 07:36 PM

Reprinting Completed Jobs

Listing completed jobs

By default the lpstat command lists the queued/pending jobs on a print queue. However the completed jobs still present on the server can be listed using the "-W completed" option.

For example, to list the completed jobs on the local print server for the queue named "examplep":

[user@host] lpstat -H localhost -W completed examplep
examplep-8821248         ogo             249856   Fri 30 Aug 2019 02:17:14 PM EDT
examplep-8821289         ogo             251904   Fri 30 Aug 2019 02:28:04 PM EDT
examplep-8821290         ogo             253952   Fri 30 Aug 2019 02:28:08 PM EDT
examplep-8821321         ogo             249856   Fri 30 Aug 2019 02:34:48 PM EDT
examplep-8821333         ogo             222208   Fri 30 Aug 2019 02:38:16 PM EDT
examplep-8821337         ogo             249856   Fri 30 Aug 2019 02:38:50 PM EDT
examplep-8821343         ogo             249856   Fri 30 Aug 2019 02:39:31 PM EDT
examplep-8821351         ogo             248832   Fri 30 Aug 2019 02:41:46 PM EDT
examplep-8821465         smagee            1024   Fri 30 Aug 2019 03:06:54 PM EDT
examplep-8821477         smagee          154624   Fri 30 Aug 2019 03:09:38 PM EDT
examplep-8821493         smagee          149504   Fri 30 Aug 2019 03:12:09 PM EDT
examplep-8821505         smagee           27648   Fri 30 Aug 2019 03:12:36 PM EDT
examplep-8821507         ogo             256000   Fri 30 Aug 2019 03:13:26 PM EDT
examplep-8821562         ogo             251904   Fri 30 Aug 2019 03:23:14 PM EDT

Reprinting a completed job

Once the job id is known - it is the far left column of the lpstat output - the job can be resubmitted using the lp command.

To reprint the job with the id of "examplep-8821343", simply:

[user@host] lp -i examplep-8821343 -H restart

by whitemice at August 30, 2019 07:29 PM

Creating & Deleting CUPS Queues via CLI

Create A Print Queue

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631 -p examplelm1 -E \
  -m "foomatic:HP-LaserJet-laserjet.ppd" -D "Example Pick Ticket Printer"\
   -L "Grand Rapids" -E -v lpd://printer.example.com/lp

This will create a queue named examplelm1 on the host cups.example.com as user adam.

  • "-D" and "-L" specify the printer's description and location, respectively.
  • The "-E" option, which must occur after the "-h" and -p" options instructs CUPS to immediately set the new print queue to enabled and accepting jobs.
  • "-v" option specifies the device URI used to communicate with the actual printer.

The printer driver file "foomatic:HP-LaserJet-laserjet.ppd" must be a PPD file available to the print server. PPD files installed on the server can be listed using the "lpinfo -m" command:

[root@crew ~]# lpinfo -m | more
foomatic:Alps-MD-1000-md2k.ppd Alps MD-1000 Foomatic/md2k
foomatic:Alps-MD-1000-ppmtomd.ppd Alps MD-1000 Foomatic/ppmtomd
foomatic:Alps-MD-1300-md1xMono.ppd Alps MD-1300 Foomatic/md1xMono
foomatic:Alps-MD-1300-md2k.ppd Alps MD-1300 Foomatic/md2k
foomatic:Alps-MD-1300-ppmtomd.ppd Alps MD-1300 Foomatic/ppmtomd
...

The existence of the new printer can be verified by checking its status:

[root@host ~]# lpq -Pexamplelm1
examplelm1 is ready
no entries

The "-l" options of the lpstat command can be used to interrogate the details of the queue:

[root@host ~]# lpstat -l -pexamplelm1
printer examplelm1 is idle.  enabled since Fri 30 Aug 2019 02:56:11 PM EDT
    Form mounted:
    Content types: any
    Printer types: unknown
    Description: Example Pick Ticket Printer
    Alerts: none
    Location: Grand Rapids
    Connection: direct
    Interface: /etc/cups/ppd/examplelm1.ppd
    On fault: no alert
    After fault: continue
    Users allowed:
        (all)
    Forms allowed:
        (none)
    Banner required
    Charset sets:
        (none)
    Default pitch:
    Default page size:
    Default port settings:

Delete A Print Queue

A print queue can also be deleted using the same lpadmin command used to create the queue.

[root@host ~]# /usr/sbin/lpadmin -U adam -h cups.example.com:631  -x examplelm1
Password for adam on crew.mormail.com? 
lpadmin: The printer or class was not found.
[root@host ~]# lpq -Pexamplelm1
lpq: Unknown destination "examplelm1"!

Note that deleting the print queue only appears to fail; the error is reported because the lpadmin command attempts to report the status of the named queue after the operation, and by then the queue no longer exists.

by whitemice at August 30, 2019 07:11 PM

July 25, 2019

Whitemice Consulting

Changing Domain Password

Uh oh, Active Directory password is going to expire!

Ugh, do I need to log into a Windows workstation to change my password?

Nope, it is as easy as:

awilliam@beast01:~> smbpasswd -U DOMAIN/adam  -r example.com
Old SMB password:
New SMB password:
Retype new SMB password:
Password changed for user adam

In this case DOMAIN is the NetBIOS domain name and example.com is the domain's DNS domain. One could also specify a domain controller for -r, however in most cases the bare base domain of an Active Directory backed network will resolve to the active collection of domain controllers.

by whitemice at July 25, 2019 03:29 PM

May 24, 2019

Whitemice Consulting

CRON Jobs Fail To Run w/PAM Error

Added a cron job to a service account's crontab using the standard crontab -e -u ogo command. This server has been chugging away for more than a year, with lots of stuff running within the service account - but nothing via cron.

Subsequently the cron jobs didn't run. :( The error logged in /var/log/cron was:

May 24 14:45:01 purple crond[18909]: (ogo) PAM ERROR (Authentication service cannot retrieve authentication info)

The issue turned out to be that the service account - which is a local account, not something from AD, LDAP, etc... - did not have a corresponding entry in /etc/shadow. This breaks CentOS7's default PAM stack (specified in /etc/pam.d/crond). The handy utility pwck will fix this issue, after which the jobs ran without error.

[root@purple ~]# pwck
add user 'ogo' in /etc/shadow? y
pwck: the files have been updated
[root@purple ~]# grep ogo /etc/shadow
ogo:x:18040:0:99999:7:::

by whitemice at May 24, 2019 08:09 PM

April 18, 2019

Whitemice Consulting

MySQL: Reporting Size Of All Tables

This is a query to report the number of rows and the estimated size of all the tables in a MySQL database:

SELECT 
  table_name, 
  table_rows, 
  ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
FROM information_schema.tables
WHERE table_schema = 'maindb';

Results look like:

table_name                                  table_rows mb_size 
------------------------------------------- ---------- ------- 
mageplaza_seodashboard_noroute_report_issue 314314     37.56   
catalog_product_entity_int                  283244     28.92   
catalog_product_entity_varchar              259073     29.84   
amconnector_product_log_details             178848     6.02    
catalog_product_entity_decimal              135936     16.02   
shipperhq_quote_package_items               115552     11.03   
amconnector_product_log                     114400     767.00  
amconnector_productinventory_log_details    114264     3.52    

This is a very useful query as the majority of MySQL applications are poorly designed; they tend not to clean up after themselves.
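
The same report is easy to run from a script; a minimal sketch assuming the PyMySQL driver, placeholder credentials, and the 'maindb' schema from the example:

import pymysql

# Host and credentials are placeholders; adjust for your environment.
connection = pymysql.connect(host='localhost', user='report', password='secret')
with connection.cursor() as cursor:
    cursor.execute("""
        SELECT table_name,
               table_rows,
               ROUND(((data_length + index_length) / 1024 / 1024), 2) AS mb_size
        FROM information_schema.tables
        WHERE table_schema = %s
        ORDER BY mb_size DESC""", ('maindb', ))
    for table_name, table_rows, mb_size in cursor.fetchall():
        print('{0:<45} {1:>10} {2:>8}'.format(table_name, table_rows, mb_size))
connection.close()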

by whitemice at April 18, 2019 06:30 PM

April 08, 2019

Whitemice Consulting

Informix: Listing The Locks

The current database locks in an Informix engine are easily enumerated from the sysmaster database.

SELECT 
  TRIM(s.username) AS user, 
  TRIM(l.dbsname) AS database, 
  TRIM(l.tabname) AS table,
  TRIM(l.type) AS type,
  s.sid AS session,
  l.rowidlk AS rowid
FROM sysmaster:syslocks l
  INNER JOIN sysmaster:syssessions s ON (s.sid = l.owner)
WHERE l.dbsname NOT IN('sysmaster')
ORDER BY 1; 

The results are pretty straightforward:

User      Database  Table            Type  Session  Row ID
extranet  maindb    site_master      IS    436320   0
shuber    maindb    workorder        IS    436353   0
shuber    maindb    workorder        IX    436353   0
shuber    maindb    workorder_visit  IS    436353   0
extranet  maindb    customer_master  IS    436364   0
jkelley   maindb    workorder        IX    436379   0
jkelley   maindb    workorder        IS    436379   0
mwathen   maindb    workorder        IS    436458   0

by whitemice at April 08, 2019 08:10 PM

September 08, 2018

Whitemice Consulting

Reading BYTE Fields From An Informix Unload

Exporting records from an Informix table is simple using the UNLOAD TO command. This creates a delimited text file with a row for each record and the fields of the record delimited by the specified delimiter. Useful for data archiving, the files can easily be restored or processed with a Python script.

One complexity exists; if the record contains a BYTE (BLOB) field the contents are dumped hex encoded. This is not base64. To read these files take the hex encoded string value and decode it with the faux code-page hex: content.decode("hex")
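
One caveat: the content.decode("hex") idiom is Python 2 only. If the archive is processed with Python 3 the equivalent is bytes.fromhex; a tiny sketch with a made-up payload:

# Python 3 equivalent of content.decode("hex"); the payload here is invented.
hex_payload = '255044462d312e34'
content = bytes.fromhex(hex_payload)
print(content)  # b'%PDF-1.4'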

The following script reads an Informix unload file delimited with pipes ("|") decoding the third field which was of the BYTE type.

rfile = open(ARCHIVE_FILE, 'r')
counter = 0
row = rfile.readline()
while row:
    counter += 1
    print(
        'row#{0} @ offset {1}, len={2}'
        .format(counter, rfile.tell(), len(row), )
    )
    blob_id, content, mimetype, filename, tmp_, tmp_ = row.split('|')
    # BYTE fields are dumped hex encoded; decode back to the raw bytes
    content = content.decode("hex")
    print('  BLOBid#{0} "{1}" ({2}), len={3}'.format(
        blob_id, filename, mimetype, len(content)
    ))
    if mimetype == 'application/pdf':
        if '/' in filename:
            filename = filename.replace('/', '_')
        wfile = open('wds/{0}.{1}.pdf'.format(blob_id, filename, ), 'wb')
        wfile.write(content)
        wfile.close()
    row = rfile.readline()  # advance to the next record
rfile.close()

by whitemice at September 08, 2018 08:05 PM

May 29, 2018

Whitemice Consulting

Disabling Transparent Huge Pages in CentOS7

The THP (Transparent Huge Pages) feature of modern LINUX kernels is a boon for on-metal servers with a sufficiently advanced MMU. However it can also result in performance degradation and inefficient memory use when enabled in a virtual machine [depending on the hypervisor and hosting provider]. See, for example, "Use of large pages can cause memory to be fully allocated". If you are seeing issues in a virtualized environment that point towards unexplained memory consumption it may be worthwhile to experiment with disabling THP in your guests. These are instructions for controlling the THP feature through the use of a SystemD unit.

Create the file /etc/systemd/system/disable-thp.service:

[Unit]
Description=Disable Transparent Huge Pages (THP)
[Service]
Type=simple
ExecStart=/bin/sh -c "echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled && echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag"
[Install]
WantedBy=multi-user.target

Enable the new unit:

sudo systemctl daemon-reload
sudo systemctl start disable-thp
sudo systemctl enable disable-thp

THP will now be disabled. However already allocated huge pages are still active. Rebooting the server is advised to bring up the services with THP disabled.
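
Whether the change took effect can be confirmed by reading the same sysfs files the unit writes; the kernel marks the active value with square brackets (e.g. "always madvise [never]"). A small Python check, assuming the standard sysfs paths used above:

# Report the active Transparent Huge Page settings.
for name in ('enabled', 'defrag'):
    path = '/sys/kernel/mm/transparent_hugepage/' + name
    with open(path) as handle:
        value = handle.read().strip()
    active = value[value.index('[') + 1:value.index(']')]
    print('{0}: {1}'.format(name, active))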

by whitemice at May 29, 2018 07:30 PM

May 06, 2018

Whitemice Consulting

Informix Dialect With CASE Derived Polymorphism

I ran into an interesting issue when using SQLAlchemy 0.7.7 with the Informix dialect. In a rather ugly database (which dates back to the late 1980s) there is a table called "xrefr" that contains two types of records: "supersede" and "cross". What those signify doesn't really matter for this issue so I'll skip any further explanation. But the really twisted part is that while a single field distinguishes between these two record types - it does not do so based on a consistent value. If the value of this field is "S" then the record is a "supersede", any other value (including NULL) means it is a "cross". This makes creating a polymorphic presentation of this schema a bit more complicated. But have no fear, SQLAlchemy is here!

When faced with a similar issue in the past, on top of PostgreSQL, I've created polymorphic presentations using CASE clauses. But when I tried to do this using the Informix dialect the generated queries failed. They raised the dreaded -201 "Syntax error or access violation" message.

The Informix SQLCODE -201 is in the running for "Most useless error message ever!". Currently it is tied with PHP's "Stack Frame 0" message. Microsoft's "File not found" [no filename specified] is no longer in the running as she is being held at the Hague to face war crimes charges.

Rant: Why do developers get away with such lazy error messages?

The original [failing] code that I tried looked something like this:

    class XrefrRecord(Base):
        __tablename__  = 'xrefr'
        record_id      = Column("xr_serial_no", Integer, primary_key=True)
        ....
        _supersede     = Column("xr_supersede", String(1))
        is_supersede   = column_property( case( [ ( _supersede == 'S', 1, ), ],
                                                else_ = 0 ) )

        __mapper_args__ = { 'polymorphic_on': is_supersede }   


    class Cross(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 0} 


    class Supersede(XrefrRecord): 
        __mapper_args__ = {'polymorphic_identity': 1}

The generated query looked like:

      SELECT xrefr.xr_serial_no AS xrefr_xr_serial_no,
             .....
             CASE
               WHEN (xrefr.xr_supersede = :1) THEN :2 ELSE :3
               END AS anon_1
      FROM xrefr
      WHERE xrefr.xr_oem_code = :4 AND
            xrefr.xr_vend_code = :5 AND
            CASE
              WHEN (xrefr.xr_supersede = :6) THEN :7
              ELSE :8
             END IN (:9) <--- ('S', 1, 0, '35X', 'A78', 'S', 1, 0, 0)

At a glance it would seem that this should work. If you substitute the values for their place holders in an application like DbVisualizer - it works.

The condition raising the -201 error is the use of place holders in a CASE WHEN structure within the projection clause of the query statement; the DBAPI module / Informix Engine does not [or can not] infer the type [cast] of the values. The SQL cannot be executed unless the values are bound to a type. Why this results in a -201 and not a more specific data-type related error... that is beyond my pay-grade.

An existential dilemma: Notice that when used like this in the projection clause the values to be bound are both input and output values.

The trick to get this to work is to explicitly declare the types of the values when constructing the case statement for the polymorphic mapper. This can be accomplished using the literal_column expression.

    from sqlalchemy import literal_column

    class XrefrRecord(Base):
        _supersede    = Column("xr_supersede", String(1))
        is_supersede  = column_property( case( [ ( _supersede == 'S', literal_column('1', Integer) ) ],
                                                   else_ = literal_column('0', Integer) ) )

        __mapper_args__     = { 'polymorphic_on': is_supersede }

Visually if you log or echo the statements they will not appear to be any different than before; but SQLAlchemy is now binding the values to a type when handing the query off to the DBAPI informixdb module.

Happy polymorphing!

by whitemice at May 06, 2018 08:23 PM

Sequestering E-Mail

When testing applications one of the concerns is always that their actions don't affect the real world. One aspect of this is sending e-mail; the last thing you want is for the application you are testing to send a paid-in-full customer a flurry of e-mails saying he owes you a zillion dollars. A simple, and reliable, method to avoid this is to adjust the Postfix server on the host used for testing to bury all mail in a shared folder. This way:

  • You don't need to make any changes to the application between production and testing.
  • You can see the message content exactly as it would ordinarily have been delivered.

To accomplish this you can use Postfix's generic address rewriting feature; generic address rewriting processes addresses of messages sent [vs. received as is the more typical case for address rewriting] by the service. For this example we'll rewrite every address to shared+myfolder@example.com using a regular expression.

Step#1

Create the regular expression map. Maps are how Postfix handles all rewriting; a match for the input address is looked for in the left hand [key] column and rewritten in the form specified by the right hand [value] column.

echo "/(.)/           shared+myfolder@example.com" > /etc/postfix/generic.regexp

Step#2

Configure Postfix to use the new map for generic address rewriting.

postconf -e smtp_generic_maps=regexp:/etc/postfix/generic.regexp

Step#3

Tell Postfix to reload its configuration.

postfix reload

Now any mail, to any address, sent via the host's Postfix service, will be driven not to the original address but to the shared "myfolder" folder.
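
A quick way to confirm the rewrite is to hand the local Postfix a test message addressed to an outside recipient and verify it lands in the shared folder instead; a minimal Python sketch, assuming Postfix is listening on localhost port 25 and using made-up addresses:

import smtplib
from email.mime.text import MIMEText

# The recipient should be rewritten to shared+myfolder@example.com by the generic map.
message = MIMEText('Test of sequestered delivery.')
message['Subject'] = 'Sequestration test'
message['From'] = 'app@example.com'
message['To'] = 'customer@example.com'

smtp = smtplib.SMTP('localhost', 25)
smtp.sendmail('app@example.com', ['customer@example.com'], message.as_string())
smtp.quit()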

by whitemice at May 06, 2018 08:11 PM

April 22, 2018

Whitemice Consulting

LDAP extensibleMatch

One of the beauties of LDAP is how simply it lets the user or application perform searching. The various attribute types hint at how to intelligently perform searches, such as case sensitivity with strings, or whether dashes should be treated as relevant characters in the case of phone numbers, etc... However, there are circumstances when you need to override this intelligence and make your search more or less strict - for example, to force a case-sensitive match on a string. That is the purpose of the extensibleMatch.

Look at this bit of schema:

attributetype ( 2.5.4.41 NAME 'name'
EQUALITY caseIgnoreMatch
SUBSTR caseIgnoreSubstringsMatch
SYNTAX 1.3.6.1.4.1.1466.115.121.1.15{32768} )
attributetype ( 2.5.4.4 NAME ( 'sn' 'surname' )
DESC 'RFC2256: last (family) name(s) for which the entity is known by'
SUP name )

The caseIgnoreMatch means that searches on attribute "name", or its descendant "sn" (used in the objectclass inetOrgPerson), are performed in a case insensitive manner. So...

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam sn=williams dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
SASL SSF: 128
SASL installing layers
# Adam Williams, People, Entities, SAM, whitemice.org
dn: cn=Adam Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org
# Michelle Williams, People, Entities, SAM, whitemice.org
dn: cn=Michelle Williams,ou=People,ou=Entities,ou=SAM,dc=whitemice,dc=org

... this search returns two objects where the sn value is "Williams" even though the search string was "williams".

If for some reason we want to match only the exact string "williams", and not "Williams", we can use the extensibleMatch syntax.

estate1:~ # ldapsearch -Y DIGEST-MD5 -U awilliam "(sn:caseExactMatch:=williams)" dn
SASL/DIGEST-MD5 authentication started
Please enter your password:
SASL username: awilliam
search: 3
result: 0 Success
estate1:~ #

No objects are found, as both entries store the value "Williams", with an initial capital letter.

Using extensibleMatch I was able to match the value of "sn" with my own preference regarding case sensitivity. The syntax for an extensibleMatch is "({attributename}:{matchingrule}:={value})". This can be used inside a normal LDAP filter along with 'normal' matching expressions.
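
The same filter can be issued from application code; a rough python-ldap sketch, where the server URI, bind DN, credentials, and base DN are all placeholders:

import ldap

# Connection details are placeholders; substitute your own DSA and credentials.
dsa = ldap.initialize('ldap://ldap.example.com')
dsa.simple_bind_s('cn=manager,dc=whitemice,dc=org', 'secret')
# extensibleMatch filter: attribute, matching rule, then the asserted value.
results = dsa.search_s(
    'dc=whitemice,dc=org', ldap.SCOPE_SUBTREE,
    '(sn:caseExactMatch:=Williams)', ['cn'],
)
for dn, attributes in results:
    print(dn)
dsa.unbind_s()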

For more information on extensibleMatch see RFC2252 and your DSA's documentation [FYI: Active Directory is a DSA (Directory Service Agent), as is OpenLDAP].

by whitemice at April 22, 2018 03:14 PM

Android, SD cards, and exfat

I needed to prepare some SD cards for deployment to Android phones. After formatting the first SD card in a phone I moved it to my laptop and was met with the "Error mounting... unknown filesystem type exfat" error. That was somewhat startling as GVFS gracefully handles almost anything you throw at it. Following this I dropped down to the CLI to inspect how the SD card was formatted.

awilliam@beast01:~> sudo fdisk -l /dev/mmcblk0
Disk /dev/mmcblk0: 62.5 GiB, 67109912576 bytes, 131074048 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start       End   Sectors  Size Id Type
/dev/mmcblk0p1 *     2048 131074047 131072000 62.5G  7 HPFS/NTFS/exFAT

Seeing the file-system type I guessed that I was missing support for the hack that is exFAT [exFAT is FAT tweaked for use on large SD cards]. A zypper search exfat found two uninstalled packages; GVFS is principally an encapsulation of fuse that adds GNOME awesome into the experience - so the existence of a package named "fuse-exfat" looked promising.

I installed the two related packages:

awilliam@beast01:~> sudo zypper in exfat-utils fuse-exfat
(1/2) Installing: exfat-utils-1.2.7-5.2.x86_64 ........................[done]
(2/2) Installing: fuse-exfat-1.2.7-6.2.x86_64 ........................[done]
Additional rpm output:
Added 'exfat' to the file /etc/filesystems
Added 'exfat_fuse' to the file /etc/filesystems

I removed the SD card from my laptop, reinserted it, and it mounted. No restart of anything required. GVFS rules! At this point I could move forward with rsync'ing the gigabytes of documents onto the SD card.

It is also possible to initially format the card on the openSUSE laptop. Partition the card creating a partition of type "7" and then use mkfs.exfat to format the partition. Be careful to give each card a unique ID using the -n option.

awilliam@beast01:~> sudo mkfs.exfat  -n 430E-2980 /dev/mmcblk0p1
mkexfatfs 1.2.7
Creating... done.
Flushing... done.
File system created successfully.

The mkfs.exfat command is provided by the exfat-utils package; a filesystem-utils package exists for most (all?) supported file-systems. These -utils packages provide the various commands to create, check, repair, or tune the eponymous file-system type.

by whitemice at April 22, 2018 02:34 PM

April 03, 2018

Whitemice Consulting

VERR_PDM_DEVHLPR3_VERSION_MISMATCH

After downloading a VirtualBox ready ISO of OpenVAS the newly created virtual machine to host the instance failed to start with a VERR_PDM_DEVHLPR3_VERSION_MISMATCH error. The quick-and-dirty solution was to set the instance to use USB 1.1. This setting is changed under Machine -> Settings -> USB -> Select USB 1.1 OHCI Controller. After that change the instance now boots and runs the installer.

virtualbox-qt-5.1.34-47.1.x86_64
virtualbox-5.1.34-47.1.x86_64
virtualbox-host-kmp-default-5.1.34_k4.4.120_45-47.1.x86_64
kernel-default-4.4.120-45.1.x86_64
openSUSE 42.3 (x86_64)

by whitemice at April 03, 2018 12:21 PM

March 11, 2018

Whitemice Consulting

AWESOME: from-to Change Log viewer for PostgreSQL

Upgrading a database is always a tedious process - a responsible administrator will have to read through the Changelog for every subsequent version from the version ze is upgrading from to the one ze is upgrading to.

Then I found this! This is a Changelog viewer which allows you to select a from and a to version and shows you all the changelogs in between, on one page. You still have to read it, of course, but this is a great time saver.

by whitemice at March 11, 2018 01:15 AM

January 17, 2018

Whitemice Consulting

Discovering Informix Version Via SQL

It is possible using the dbinfo function to retrieve the engine's version information via an SQL command:

select dbinfo('version','full') from sysmaster:sysdual

which will return a value like:

IBM Informix Dynamic Server Version 12.10.FC6WE

by whitemice at January 17, 2018 08:56 PM

October 09, 2017

Whitemice Consulting

Failure to apply LDAP pages results control.

On a particular instance of OpenGroupware Coils the switch from an OpenLDAP server to an Active Directory service - which should be nearly seamless - resulted in "Failure to apply LDAP pages results control.". Interesting, as Active Directory certainly supports paged results - the 1.2.840.113556.1.4.319 control.

But there is a caveat! Of course.

Active Directory does not support the combination of the paged control and referrals in some situations. So to reliably get the paged results control enabled it is also necessary to disable referrals.

...
dsa = ldap.initialize(config.get('url'))
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
....

Disabling referrals is likely what you want anyway, unless you are going to implement referral following. Additionally, in the case of Active Directory the referrals rarely reference data which an application would be interested in.
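
With referrals disabled the paged retrieval loop itself is straightforward; a rough sketch using python-ldap's SimplePagedResultsControl, where the URI, credentials, base DN, filter, and page size are all assumptions:

import ldap
from ldap.controls import SimplePagedResultsControl

dsa = ldap.initialize('ldap://ad.example.com')   # placeholder URI
dsa.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
dsa.set_option(ldap.OPT_REFERRALS, 0)
dsa.simple_bind_s('user@example.com', 'secret')  # placeholder credentials

page_control = SimplePagedResultsControl(True, size=500, cookie='')
while True:
    msgid = dsa.search_ext(
        'DC=example,DC=com', ldap.SCOPE_SUBTREE,
        '(objectClass=user)', ['cn'],
        serverctrls=[page_control],
    )
    rtype, rdata, rmsgid, serverctrls = dsa.result3(msgid)
    for dn, attributes in rdata:
        if dn:  # referral entries, if any slip through, have no DN
            print(dn)
    cookies = [
        control.cookie for control in serverctrls
        if control.controlType == SimplePagedResultsControl.controlType
    ]
    if not cookies or not cookies[0]:
        break  # no cookie means the last page has been delivered
    page_control.cookie = cookies[0]
dsa.unbind_s()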

The details of Active Directory and paged results + referrals can be found here.

by whitemice at October 09, 2017 03:03 PM

August 31, 2017

Whitemice Consulting

opensuse 42.3

Finally got around to updating my work-a-day laptop to openSUSE 42.3. As usual I did an in-place distribution update via zypper. This involves replacing the previous version repositories with the current version repositories - and then performing a dup. And as usual the process was quick and flawless. After a reboot everything just-works and I go back to doing useful things. This makes for an uninteresting BLOG post, which is as it should be.

zypper lr --url
zypper rr http-download.opensuse.org-f7da6bb3
zypper rr packman
zypper rr repo-non-oss
zypper rr repo-oss
zypper rr repo-update-non-oss
zypper rr repo-update-oss
zypper rr server:mail
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/non-oss/ repo-non-oss
zypper ar http://download.opensuse.org/distribution/leap/42.3/repo/oss/ repo-oss
zypper ar http://download.opensuse.org/repositories/server:/mail/openSUSE_Leap_42.3/ server:mail
zypper ar http://download.opensuse.org/update/leap/42.3/non-oss/ repo-update-non-oss
zypper ar http://download.opensuse.org/update/leap/42.3/oss/ repo-update-oss
zypper ar http://ftp.gwdg.de/pub/linux/misc/packman/suse/openSUSE_Leap_42.3 packman
zypper lr --url  # double check
zypper ref  # refresh
zypper dup --download-in-advance  # distribution update
zypper up  # update, just a double check
reboot

Done.

by whitemice at August 31, 2017 12:49 PM

June 06, 2017

Whitemice Consulting

LDAP Search For Object By SID

All the interesting objects in an Active Directory DSA have an objectSID which is used throughout the Windows subsystems as the reference for the object. When using a Samba4 (or later) domain controller it is possible to simply query for an object by its SID, as one would expect - like "(&(objectSID=S-1-...))". However, when using a Microsoft DC searching for an object by its SID is not as straight-forward; attempting to do so will only result in an invalid search filter error. Active Directory stores the objectSID as a binary value and one needs to search for it as such. Fortunately converting the text string SID value to a hex string is easy: see the guid2hex(text_sid) below.

import ldap
import ldap.sasl
import ldaphelper

PDC_LDAP_URI = 'ldap://pdc.example.com'
OBJECT_SID = 'S-1-5-21-2037442776-3290224752-88127236-1874'
LDAP_ROOT_DN = 'DC=example,DC=com'

def guid2hex(text_sid):
    """convert the text string SID to a hex encoded string"""
    s = ['\\{:02X}'.format(ord(x)) for x in text_sid]
    return ''.join(s)

def get_ldap_results(result):
    return ldaphelper.get_search_results(result)

if __name__ == '__main__':

    pdc = ldap.initialize(PDC_LDAP_URI)
    pdc.sasl_interactive_bind_s("", ldap.sasl.gssapi())
    result = pdc.search_s(
        LDAP_ROOT_DN, ldap.SCOPE_SUBTREE,
        '(&(objectSID={0}))'.format(guid2hex(OBJECT_SID), ),
        [ '*', ]
    )
    for obj in [x for x in get_ldap_results(result) if x.get_dn()]:
        """filter out objects lacking a DN - they are LDAP referrals"""
        print('DN: {0}'.format(obj.get_dn(), ))

    pdc.unbind()

by whitemice at June 06, 2017 12:11 AM

March 07, 2017

Whitemice Consulting

KDC reply did not match expectations while getting initial credentials

Occasionally one gets reminded of something old.

[root@NAS04256 ~]# kinit adam@example.com
Password for adam@Example.Com: 
kinit: KDC reply did not match expectations while getting initial credentials

Huh.

[root@NAS04256 ~]# kinit adam@EXAMPLE.COM
Password for adam@EXAMPLE.COM:
[root@NAS04256 ~]# 

In some cases the case of the realm name matters.

by whitemice at March 07, 2017 02:18 PM

February 09, 2017

Whitemice Consulting

The BOM Squad

So you have a lovely LDIF file of Active Directory schema that you want to import using the ldbmodify tool provided with Samba4... but when you attempt the import it fails with the error:

Error: First line of ldif must be a dn not 'dn'
Modified 0 records with 0 failures

Eh? @&^$*&;@&^@! It does start with a dn: attribute - it is an LDIF file!

Once you cool down you look at the file using od, just in case, and you see:

0000000   o   ;   ?   d   n   :  sp   c   n   =   H   o   r   d   e   -

The first line does not actually begin with "dn:" - it starts with the "o;?". You've been bitten by the BOM! But even opening the file in vi you cannot see the BOM because every tool knows about the BOM and deals with it - with the exception of anything LDIF related.

The fix is to break out dusty old sed and remove the BOM -

sed -e '1s/^\xef\xbb\xbf//' horde-person.ldf  > nobom.ldf

And double checking it with od again:

0000000   d   n   :  sp   c   n   =   H   o   r   d   e   -   A   g   o

The file now actually starts with a "dn" attribute!
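
The same cleanup can be done without sed; a small Python sketch that strips a leading UTF-8 BOM if one is present, using the file names from the example:

BOM = b'\xef\xbb\xbf'

with open('horde-person.ldf', 'rb') as source:
    data = source.read()
if data.startswith(BOM):
    data = data[len(BOM):]
with open('nobom.ldf', 'wb') as target:
    target.write(data)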

by whitemice at February 09, 2017 12:09 PM

Installation & Initialization of PostGIS

Distribution: CentOS 6.x / RHEL 6.x

If you already have a current version of PostgreSQL server installed on your server from the PGDG repository you should skip these first two steps.

Enable PGDG repository

curl -O http://yum.postgresql.org/9.3/redhat/rhel-6-x86_64/pgdg-centos93-9.3-1.noarch.rpm
rpm -ivh pgdg-centos93-9.3-1.noarch.rpm

Disable all PostgreSQL packages from the distribution repositories. This involves editing the /etc/yum.repos.d/CentOS-Base.repo file. Add the line "exclude=postgresql*" to both the "[base]" and "[updates]" stanzas. If you skip this step everything will appear to work - but in the future a yum update may break your system.

Install PostrgreSQL Server

yum install postgresql93-server

Once installed you need to initialize and start the PostgreSQL instance

service postgresql-9.3 initdb
service postgresql-9.3 start

If you wish the PostgreSQL instance to start with the system at boot use chkconfig to enable it for the current runlevel.

chkconfig postgresql-9.3 on

The default data directory for this instance of PostgreSQL will be "/var/lib/pgsql/9.3/data". Note that this path is versioned - this prevents the installation of a downlevel or uplevel PostgreSQL package destroying your database if you do so accidentally or forget to follow the appropriate version migration procedures. Most documentation will assume a data directory like "/var/lib/postgresql" [notably unversioned]; simply keep in mind that you always need to contextualize the paths used in documentation to your site's packaging and provisioning.

Enable EPEL Repository

The EPEL repository provides a variety of the dependencies of the PostGIS packages provided by the PGDG repository.

curl -O http://epel.mirror.freedomvoice.com/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm

Installing PostGIS

The PGDG package for PostGIS should now install without errors.

yum install postgis2_93

If you do not have EPEL successfully enabled when you attempt to install the PGDG PostGIS packages you will see dependency errors.

---> Package postgis2_93-client.x86_64 0:2.1.1-1.rhel6 will be installed
--> Processing Dependency: libjson.so.0()(64bit) for package: postgis2_93-client-2.1.1-1.rhel6.x86_64
--> Finished Dependency Resolution
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libcfitsio.so.0()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
           Requires: libspatialite.so.2()(64bit)
Error: Package: gdal-libs-1.9.2-4.el6.x86_64 (pgdg93)
...

Initializing PostGIS

The template database "template_postgis" is expected to exist by many PostGIS applications; but this database is not created automatically.

su - postgres
createdb -E UTF8 -T template0 template_postgis
-- ... See the following note about enabling plpgsql ...
psql template_postgis
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/postgis.sql
psql -d template_postgis -f /usr/pgsql-9.3/share/contrib/postgis-2.1/spatial_ref_sys.sql 

Using the PGDG packages the PostgreSQL plpgsql embedded language, frequently used to develop stored procedures, is enabled in the template0 database from which the template_postgis database is derived. If you are attempting to use other PostgreSQL packages, or have built PostgreSQL from source [are you crazy?], you will need to ensure that this language is enabled in your template_postgis database before importing the schema - to do so run the following command immediately after the "createdb" command. If you see the error stating the language is already enabled you are good to go, otherwise you should see a message stating the language was enabled. If creating the language fails for any other reason than already being enabled you must resolve that issue before proceeding to install your GIS applications.

$ createlang -d template_postgis plpgsql
createlang: language "plpgsql" is already installed in database "template_postgis"

Celebrate

PostGIS is now enabled in your PostgreSQL instance and you can use and/or develop exciting new GIS & geographic applications.

by whitemice at February 09, 2017 11:43 AM

February 03, 2017

Whitemice Consulting

Unknown Protocol Drops

I've seen this one a few times and it is always momentarily confusing: on an interface on a Cisco router there is a rather high number of "unknown protocol drops". What protocol could that be?! Is it some type of hack attempt? Ambitious if they are shaping their own raw packets onto the wire. But, no, the explanation is the much less exciting, and typical, lazy ape kind of error.

  5 minute input rate 2,586,000 bits/sec, 652 packets/sec
  5 minute output rate 2,079,000 bits/sec, 691 packets/sec
     366,895,050 packets input, 3,977,644,910 bytes
     Received 15,91,926 broadcasts (11,358 IP multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog
     0 input packets with dribble condition detected
     401,139,438 packets output, 2,385,281,473 bytes, 0 underruns
     0 output errors, 0 collisions, 3 interface resets
     97,481 unknown protocol drops  <<<<<<<<<<<<<<
     0 babbles, 0 late collision, 0 deferred

This is probably the result of CDP (Cisco Discovery Protocol) being enabled on one interface on the network and disabled on this interface. CDP is the unknown protocol. CDP is a proprietary Data Link layer protocol that, if enabled, sends an announcement out the interface every 60 seconds. If the receiving end gets the CDP packet and has "no cdp enable" in the interface configuration - those announcements count as "unknown protocol drops". The solution is to make the CDP settings, enabled or disabled, consistent on every device in the interface's scope.

by whitemice at February 03, 2017 06:32 PM

Screen Capture & Recording in GNOME3

GNOME3, aka GNOME Shell, provides a comprehensive set of hot-keys for capturing images from your screen as well as recording your desktop session. These tools are priceless for producing documentation and reporting bugs; recording your interaction with an application is much easier than describing it.

  • Alt + Print Screen : Capture the current window to a file
  • Ctrl + Alt + Print Screen : Capture the current window to the cut/paste buffer
  • Shift + Print Screen : Capture a selected region of the screen to a file
  • Ctrl + Shift + Print Screen : Capture a selected region of the screen to the cut/paste buffer
  • Print Screen : Capture the entire screen to a file
  • Ctrl + Print Screen : Capture the entire screen to the cut/paste buffer
  • Ctrl + Alt + Shift + R : Toggle screencast recording on and off.

Recorded video is in WebM format (VP8 codec, 25fps). Videos are saved to the ~/Videos folder and image files are saved in PNG format into the ~/Pictures folder. When screencast recording is enabled there will be a red recording indicator in the bottom right of the screen; this indicator will disappear once screencasting is toggled off again.

by whitemice at February 03, 2017 06:29 PM

Converting a QEMU Image to a VirtualBox VDI

I use VirtualBox for hosting virtual machines on my laptop and received a Windows 2008R2 server image from a consultant as a compressed QEMU image. So how to convert the QEMU image to a VirtualBox VDI image?

Step#1: Convert QEMU image to raw image.

Starting with the file WindowsServer1-compressed.img (size: 5,172,887,552)

Convert the QEMU image to a raw/dd image using the qemu-img utility.

qemu-img convert  WindowsServer1-compressed.img  -O raw  WindowsServer1.raw

I now have the file WindowsServer1.raw (size: 21,474,836,480)

Step#2: Convert the RAW image into a VDI image using the VBoxManage tool.

VBoxManage convertfromraw WindowsServer1.raw --format vdi  WindowsServer1.vdi
Converting from raw image file="WindowsServer1.raw" to file="WindowsServer1.vdi"...
Creating dynamic image with size 21474836480 bytes (20480MB)...

This takes a few minutes, but finally I have the file WindowsServer1.vdi (size: 14,591,983,616)

Step#3: Compact the image

Smaller images are better! It is likely the image is already compact; however this also doubles as an integrity check.

VBoxManage modifyhd WindowsServer1.vdi --compact
0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

Sure enough the file is the same size as when we started (size: 14,591,983,616). Upside is the compact operation went through the entire image without any errors.

Step#4: Cleanup and make a working copy.

Now MAKE A COPY of that converted file and use that for testing. Set the original as immutable [chattr +i] to prevent that being used on accident. I do not want to waste time converting the original image again.

Throw away the intermediate raw image and compress the image we started with for archive purposes.

rm WindowsServer1.raw 
cp WindowsServer1.vdi WindowsServer1.SCRATCH.vdi 
sudo chattr +i WindowsServer1.vdi
bzip2 -9 WindowsServer1-compressed.img 

The files at the end:

File                               Size
WindowsServer1-compressed.img.bz2  5,102,043,940
WindowsServer1.SCRATCH.vdi         14,591,983,616
WindowsServer1.vdi                 14,591,983,616

Step#5

Generate a new UUID for the scratch image. This is necessary anytime a disk image is duplicated. Otherwise you risk errors like "Cannot register the hard disk '/archive/WindowsServer1.SCRATCH.vdi' {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} because a hard disk '/archive/WindowsServer1.vdi' with UUID {6ac7b91f-51b6-4e61-aa25-8815703fb4d7} already exists" as you move images around.

VBoxManage internalcommands sethduuid WindowsServer1.SCRATCH.vdi
UUID changed to: ab9aa5e0-45e9-43eb-b235-218b6341aca9

Generating a unique UUID guarantees that VirtualBox is aware that these are distinct disk images.

Versions: VirtualBox 5.1.12, QEMU Tools 2.6.2. On openSUSE LEAP 42.2 the qemu-img utility is provided by the qemu-img package.

by whitemice at February 03, 2017 02:36 PM

January 24, 2017

Whitemice Consulting

XFS, inodes, & imaxpct

Attempting to create a file on a large XFS filesystem - and it fails with an exception indicating insufficient space! There are available blocks - df says so. Huh? While, unlike traditional UNIX filesystems, XFS doesn't suffer from the boring old issue of "inode exhaustion", it does have inode limits - based on a percentage of the filesystem size.

linux-yu4c:~ # xfs_info /mnt
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=15262188 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=61048752, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=29808, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0

The key is that "imaxpct" value. In this example inodes are limited to 25% of the filesystem's capacity. That is a lot of inodes! But some tools and distributions may default that percentage to some much lower value - like 5% or 10% (for what reason I don't know). This value can be set at filesystem creation time using the "-i maxpct=nn" option or adjusted later using the xfs_growfs command's "-m nn" option. So if you have an XFS filesystem with available capacity that is telling you it is full: check your "imaxpct" value, then grow the inode percentage limit.

by whitemice at January 24, 2017 07:59 PM

Changing FAT Labels

I use a lot of SD cards and USB thumb-drives; when plugged in these devices automount in /media as either the file-system label (if set) or some arbitrary thing like /media/disk46. So how can one modify or set the label on an existing FAT filesystem? Easy as:

mlabel -i /dev/mmcblk0p1 -s ::WMMI06  
Volume has no label 
mlabel -i /dev/mmcblk0p1  ::WMMI06
mlabel -i /dev/mmcblk0p1 -s :: 
Volume label is WMMI06

mlabel -i /dev/sdb1 -s ::
Volume label is Cruzer
mlabel -i /dev/sdb1  ::DataCruzer
mlabel -i /dev/sdb1 -s ::
Volume label is DataCruzer (abbr=DATACRUZER )

mlabel is provided by the mtools package. Since we don't have a drive letter the "::" is used to refer to the actual device specified using the "-i" directive. The "-s" directive means show; otherwise the command attempts to set the label to the value immediately following (no whitespace!) the drive designation [default behavior is to set, not show].

by whitemice at January 24, 2017 07:51 PM

Deduplicating with group_by, func.min, and having

You have a text file with four million records and you want to load this data into a table in an SQLite database. But some of these records are duplicates (based on certain fields) and the file is not ordered. Due to the size of the data, loading the entire file into memory doesn't work very well. And due to the number of records, doing a check-at-insert when loading the data is also prohibitively slow. But what does work pretty well is just to load all the data and then deduplicate it. Having an auto-increment record id is what makes this possible.

class VendorSKU(scratch_base):
    __tablename__ = 'sku'
    id      = Column(Integer, primary_key=True, autoincrement=True)
...

Once all the data gets loaded into the table the deduplication is straight-forward using minimum and group by.

query = scratch.query(
    func.min( VendorCross.id ),
    VendorCross.sku,
    VendorCross.oem,
    VendorCross.part ).filter(VendorCross.source == source).group_by(
        VendorCross.sku,
        VendorCross.oem,
        VendorCross.part ).having(
            func.count(VendorCross.id) > 1 )
counter = 0
for (id, sku, oem, part, ) in query.all( ):
    counter += 1
    scratch.query(VendorCross).filter(
        and_(
            VendorCross.source == source, 
            VendorCross.sku == sku,
            VendorCross.oem == oem,
            VendorCross.part == part,
            VendorCross.id != id ) ).delete( )
    if not (counter % 1000):
        # Commit every 1,000 records, SQLite does not like big transactions
        scratch.commit()
scratch.commit()

This incantation removes all the records from each group except for the one with the lowest id. The trick for good performance is to batch many deletes into each transaction - only commit every so many [in this case 1,000] groups processed; just also remember to commit at the end to catch the deletes from the last iteration.

by whitemice at January 24, 2017 07:45 PM

AIX Printer Migration

There are few things in IT more utterly and completely baffling than the AIX printer subsystem.  While powerful it accomplishes its task with more arcane syntax and scattered settings files than anything else I have encountered. So the day inevitably comes when you face the daunting task of copying/recreating several hundred print queues from some tired old RS/6000 we'll refer to as OLDHOST to a shiny new pSeries known here as NEWHOST.  [Did you know the bar Stellas in downtown Grand Rapids has more than 200 varieties of whiskey on their menu?  If you've dealt with AIX's printing subsystem you will understand the relevance.] To add to this Sisyphean task the configuration of those printers have been tweaked, twiddled and massaged individually for years - so that rules out the wonderful possibility of saying to some IT minion "make all these printers, set all the settings exactly the same" [thus convincing the poor sod to seek alternate employment, possibly as a bar-tender at the aforementioned Stellas].

Aside: Does IBM really truly not provide a migration technique?  No. Seriously, yeah. 

But I now present to you the following incantation [to use at your own risk]:

scp root@OLDHOST:/etc/qconfig /etc/qconfig
stopsrc -cg spooler
startsrc -g spooler
rsync --recursive --owner --group --perms \
  root@OLDHOST:/var/spool/lpd/pio/@local/custom/ \
  /var/spool/lpd/pio/@local/custom/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/dev/ \
  /var/spool/lpd/pio/@local/dev/
rsync --recursive --owner --group --perms  \
  root@OLDHOST:/var/spool/lpd/pio/@local/ddi/ \
  /var/spool/lpd/pio/@local/ddi/
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*
enq -d
cd  /var/spool/lpd/pio/@local/custom
for FILE in `ls`
 do
   /usr/lib/lpd/pio/etc/piodigest $FILE 
 done
chown root:printq /var/spool/lpd/pio/@local/custom/*
chown root:printq /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/ddi/*
chmod 664 /var/spool/lpd/pio/@local/custom/*

Execute this sequence on NEWHOST and the print queues and their configurations will be "migrated". 

NOTE#1: This depends on all those print queues being network attached printers.  If the system has direct attached printers that correspond to devices such as concentrators, lion boxes, serial ports, SCSI buses,.... then please do not do this, you are on your own.  Do not call me, we never talked about this.

NOTE#2: This will work once.  If you've then made changes to printer configuration or added/removed printers do not do it again.  If you want to do it again first delete ALL the printers on NEWHOST.  Then reboot, just to be safe.  At least stop and start the spooler service after deleting ALL the printer queues.

NOTE#3: I do not endorse, warranty, or stand behind this method of printer queue migration.  It is probably a bad idea.  But the entire printing subsystem in AIX is a bad idea, sooo.... If this does not work do not call me; we never talked about this.

by whitemice at January 24, 2017 11:46 AM

The source files could not be found.

I have several Windows 2012 VMs in a cloud environment and discovered I am unable to install certain roles / features. Attempting to do so fails with a "The source files could not be found." error. This somewhat misleading message indicates Windows is looking for the OS install media. Most of the solutions on the Interwebz for working around this error describe how to set the server with an alternate path to the install media ... problem being that these VMs were created from a pre-activated OVF image and there is no install media available from the cloud's library.

Lacking install media the best solution is to set the server to skip the install media and grab the files from Windows Update.

  1. Run "gpedit.msc"
  2. "Local Computer Policy"
  3. "Administrative Templates"
  4. "System"
  5. Enable "Specify settings for optional component installation and component repair"
  6. Check the "Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS)"

Due to technical limitations WSUS cannot be utilized for this purpose; which is sad given that there is a WSUS server sitting in the same cloud. :(

by whitemice at January 24, 2017 11:31 AM

October 03, 2016

Whitemice Consulting

Playing With Drive Images

I purchased a copy of Windows 10 on a USB thumbdrive. I chose physical media in order to have (a) a backup and (b) no need to bother with downloading a massive image. Primarily this copy of Windows will be used in VirtualBox for testing, using PowerShell, and other tedious system administrivia. The first thing I did when it arrived was use dd to make a full image of the thumbdrive so I could tuck it away in a safe place.

dd if=/dev/sde of=Windows10.Thumbdrive.20160918.dd bs=512

But now the trick is to take that raw image and convert it to a VMDK so that it can be attached to a virtual machine. The VBoxManage command provides this functionality:

VBoxManage internalcommands createrawvmdk -filename Windows10.vmdk -rawdisk Windows10.Thumbdrive.20160918.dd

Now I have a VMDK file. If you do this you will notice the VMDK file is small - it is essentially a pointer to the disk image; the purpose of the VMDK is to provide the meta-data necessary to make the hypervisor (in this case VirtualBox) happy. Upshot of that is that you cannot delete the dd image as it is part of your VMDK.

Note that this dd file is a complete disk image; including the partition table:

awilliam@beast01:/vms/ISOs> /usr/sbin/fdisk -l Windows10.Thumbdrive.20160918.dd
Disk Windows10.Thumbdrive.20160918.dd: 14.4 GiB, 15502147584 bytes, 30277632 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device                            Boot Start      End  Sectors  Size Id Type
Windows10.Thumbdrive.20160918.dd1 *     2048 30277631 30275584 14.4G  c W95 FAT32

So if I wanted to mount that partition on the host operating system I can do that by calculating the offset and mounting through loopback. The offset to the start of the partition within the drive image is the start sector multiplied by the sector size: 2,048 * 512 = 1,048,576. The mount command provides support for offset mounting:

beast01:/vms/ISOs $ sudo mount -o loop,ro,offset=1048576 Windows10.Thumbdrive.20160918.dd /mnt
beast01:/vms/ISOs # ls /mnt
83561421-11f5-4e09-8a59-933aks71366.ini  boot     bootmgr.efi  setup.exe                  x64
autorun.inf                              bootmgr  efi          System Volume Information  x86
beast01:/vms/ISOs $ sudo umount /mnt

If all I wanted was the partition, and not the drive, the same offset logic could be used to lift the partition out of the image into a distinct file:

dd if=Windows10.Thumbdrive.20160918.dd of=Windows10.image bs=512 skip=2048

The "Windows10.image" file could be mounted via loopback without bothering with an offset. It might however be more difficult to get a virtual host to boot from a FAT partition that does not have a partition table.

by whitemice at October 03, 2016 10:43 AM

September 15, 2016

Whitemice Consulting

Some Informix DATETIME/INTERVAL Tips

Determine the DATE of the first day of the current week.

(SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)

Informix always treats Sunday as day 0 of the week. The WEEKDAY function returns the number of the day of the week as a value of 0 - 6 so subtracting the weekday from current day (TODAY) returns the DATE value of Sunday of the current week.
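
For comparison, the same "Sunday is day zero" arithmetic in Python's datetime module, which counts Monday as day 0 and therefore needs the offset shifted:

import datetime

today = datetime.date.today()
# Python's weekday() counts Monday as 0; shift so Sunday counts as 0, like Informix WEEKDAY().
days_since_sunday = (today.weekday() + 1) % 7
week_start = today - datetime.timedelta(days=days_since_sunday)
print(week_start)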

Determining HOURS between two DATETIME values.

It is all about the INTERVAL data type and its rather odd syntax.

SELECT mpr.person_id, mpr.cn_name, 
  ((SUM(out_time - in_time))::INTERVAL HOUR(9) TO HOUR) AS hours
FROM service_time_card stc
  INNER JOIN morrisonpersonr mpr ON (mpr.person_id = stc.technician_id)
WHERE mpr.person_id IN (SELECT person_id FROM branch_membership WHERE branch_code = 'TSC')
  AND in_time > (SELECT TODAY - (WEEKDAY(TODAY)) UNITS DAY FROM systables WHERE tabid=1)  
GROUP BY 1,2

The "(9)" part of the expression INTERVAL HOUR(9) TO HOUR is key - it allocates lots of room for hours, otherwise any value of more than a trivial number of hours will cause the clearly correct by not helpful SQL -1265 error: "Overflow occurred on a datetime or interval operation". As, in my case I had a highest value of 6,483 hours I needed at least HOUR(4) TO HOUR to avoid the overflow error. HOUR(9) is the maximum - an expression of HOUR(10) results in an unhelpful generic SQL -201: "A syntax error has occurred.". On the other hand HOURS(9) is 114,155 years and some change, so... it is doubtful that is going to be a problem in most applications.

by whitemice at September 15, 2016 07:46 PM

August 28, 2015

Ben Rousch's Cluster of Bleep

Kivy – Interactive Applications and Games in Python, 2nd Edition Review

I was recently asked by the author to review the second edition of “Kivy – Interactive Applications in Python” from Packt Publishing. I had difficulty recommending the first edition mostly due to the atrocious editing – or lack thereof – that it had suffered. It really reflected badly on Packt, and since it was the only Kivy book available, I did not want that same inattention to quality to reflect on Kivy. Packt gave me a free ebook copy of this book in exchange for agreeing to do this review.

At any rate, the second edition is much improved over the first. Although a couple of glaring issues remain, it looks like it has been visited by at least one native English speaking editor. The Kivy content is good, and I can now recommend it for folks who know Python and want to get started with Kivy. The following is the review I posted to Amazon:

This second edition of “Kivy – Interactive Applications and Games in Python” is much improved from the first edition. The atrocious grammar throughout the first edition book has mostly been fixed, although it’s still worse than what I expect from a professionally edited book. The new chapters showcase current Kivy features while reiterating how to build a basic Kivy app, and the book covers an impressive amount material in its nearly 185 pages. I think this is due largely to the efficiency and power of coding in Python and Kivy, but also to the carefully-chosen projects the author selected for his readers to create. Despite several indentation issues in the example code and the many grammar issues typical of Packt’s books, I can now recommend this book for intermediate to experienced Python programmers who are looking to get started with Kivy.

Chapter one is a good, quick introduction to a minimal Kivy app, layouts, widgets, and their properties.

Chapter two is an excellent introduction and exploration of basic canvas features and usage. This is often a difficult concept for beginners to understand, and this chapter handles it well.

Chapter three covers events and binding of events, but is much denser and more difficult to grok than chapter two. It will likely require multiple reads to get a good understanding of the topic, but if you’re persistent, everything you need is there.

Chapter four contains a hodge-podge of Kivy user interface features. Screens and scatters are covered well, but gestures still feel like magic. I have yet to find a good in-depth explanation of gestures in Kivy, so this does not come as a surprise. Behaviors is a new feature in Kivy and a new section in this second edition of the book. Changing default styles is also covered in this chapter. The author does not talk about providing a custom atlas for styling, but presents an alternative method for theming involving Factories.

In chapter six the author does a good job of covering animations, and introduces sounds, the clock, and atlases. He brings these pieces together to build a version of Space Invaders, in about 500 lines of Python and KV. It ends up a bit code-dense, but the result is a fun game and a concise code base to play around with.

In chapter seven the author builds a TED video player including subtitles and an Android actionbar. There is perhaps too much attention paid to the VideoPlayer widget, but the resulting application is a useful base for creating other video applications.

by brousch at August 28, 2015 01:16 AM

August 06, 2015

Whitemice Consulting

Cut-N-Paste Options Greyed Out In Excel

Yesterday I encountered a user who could not cut-and-paste in Microsoft Excel. The options to Cut, Copy, and Paste were disabled - aka 'greyed out' - in the menus. Seems like an odd condition.

The conclusion is that Excel's configuration had become corrupted. Resolution involves exiting Excel, deleting Excel's customized configuration, and then restarting the application. Lacking a set of configuration files the application regenerates a new default configuration and cut-and-paste functionality is restored.

Excel stores its per-user configuration in XLB files in the %%PROFILEDIR%%\AppData\Roaming\Microsoft\Excel folder. Navigate to this folder and delete all the XLB files - with all Microsoft Office applications shut down.
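
If you want to script that clean-up, a minimal Python sketch along these lines should work (the %APPDATA% location is an assumption equivalent to the folder described above; close all Office applications first):

import glob
import os

# Excel caches its per-user toolbar/configuration customizations in XLB files;
# removing them forces Excel to regenerate a default configuration on next start.
folder = os.path.expandvars(r'%APPDATA%\Microsoft\Excel')
for xlb_file in glob.glob(os.path.join(folder, '*.xlb')):
    os.remove(xlb_file)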

After resolving this issue I found a more user-approachable solution - no diddling in the file-system - but with Excel now working I was not able to verify it [and I do not know how to deliberately corrupt Excel's configuration].

  1. Right click on a sheet tab and select "View Code"
  2. From the "View" menu select "Immediate Window" if it's not already displayed.
  3. Paste the following into the "Immediate Window" and press enter: Commandbars("Cell").Reset

Of course, deleting the per-user configuration in Excel will delete the user's customizations.

by whitemice at August 06, 2015 11:06 AM

May 19, 2015

Whitemice Consulting

Which Application?

Which application manages this type of file? How can I, by default, open files of type X with application Y? These questions float around in GNOME forums and mailing lists on a regular basis.

The answer is: gvfs-mime .

To determine what application by default opens a file of a given type, as well as what other applications are installed which register support for that file-type, use the --query option, like:

awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': org.gnome.gedit.desktop
Registered applications:
    geany.desktop
    org.gnome.gedit.desktop
    calc.desktop
    ghex.desktop
    wine-extension-txt.desktop
    monodevelop.desktop
    writer.desktop
Recommended applications:
    geany.desktop
    org.gnome.gedit.desktop

Applications register support for document types using the XDG ".desktop" standard, and the default application is stored per-user in the file $XDG_DATA_HOME/applications/mimeapps.list. In most cases $XDG_DATA_HOME is $HOME/.local/share [this is the value, according to the spec, when the XDG_DATA_HOME environment variable is not set].
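
To inspect that file programmatically, a minimal Python sketch (assuming the default per-user location and the standard "Default Applications" section of the spec) could look like this:

import configparser
import os

# mimeapps.list is an INI-style file; per-user defaults live in the
# [Default Applications] section as mimetype=application.desktop entries.
path = os.path.join(
    os.environ.get('XDG_DATA_HOME', os.path.expanduser('~/.local/share')),
    'applications', 'mimeapps.list')
config = configparser.ConfigParser()
config.read(path)
print(config.get('Default Applications', 'text/x-python', fallback='(no user default)'))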

Not only can gvfs-mime query the association database, it can also be used by the user to set their default handler - simpler than attempting to discover the right object to right-click.

awilliam@GNOMERULEZ:~> gvfs-mime --set text/x-python geany.desktop
Set geany.desktop as the default for text/x-python
awilliam@GNOMERULEZ:~> gvfs-mime --query text/x-python
Default application for 'text/x-python': geany.desktop
Registered applications:
    geany.desktop
    org.gnome.gedit.desktop
    calc.desktop
    ghex.desktop
    wine-extension-txt.desktop
    monodevelop.desktop
    writer.desktop
Recommended applications:
    geany.desktop
    org.gnome.gedit.desktop

Python files are now, by default, handled by Geany.

by whitemice at May 19, 2015 11:12 AM

May 07, 2015

Ben Rousch's Cluster of Bleep

My Farewell to GRMakers

Many of you have seen the recent board resignations and are wondering what the heck is going on over at GR Makers. We each have our own experiences, and I will set out mine here. It is a long story, but I think you deserve to hear it, so you can draw your own conclusions. I encourage you to reply to me personally (brousch@gmail.com) or via the comments on this blog post if you’d like to provide clarifications or additions to what I have to say.

I joined GR Makers not so much to make things, but to have an excuse to hang out with the most interesting group of people I’d ever met. That group started as half a dozen open source enthusiasts gathering at weekly Linux user group meetings at coffee shops, and grew to a much larger, more diverse, and eclectic gathering of developers, inventors, designers, electronics hackers, and much more thanks to Casey DuBois’ welcoming personality, non-judgemental inclusiveness, and networking prowess. A part of what brought the group together was an unstructured openness that made everyone feel like they had a say in what we were doing. When the group grew too large to continue meeting in Casey’s garage, several regulars looked around for ways of keeping the group together and growing in other locations.

Mutually Human Software offered a physical space and monetary support to keep the group together, but we had to change how the group was run. Since MHS was providing so many resources, they would own the group. There was a large meeting to decide if this was the way we wanted to go. The opinions were divided, but in the end we had to take this deal or disband the group because we’d have nowhere to meet. Casey took a job with MHS, and over the course of two years we slowly became a real makerspace. Casey continued to make connections between GR Makers, companies who donated equipment and supplies, and the community. The Socials became bigger, and so did the space.

As we grew, communication became a problem. If you didn’t attend the weekly socials and talk to Casey in person, you had no idea what was going on. Even those of us who were regularly there had no idea about how the makerspace was being run. An opaque layer existed between the community, and those who actually owned and made decisions affecting the group. Even basic questions from paying members would go unanswered when submitted to the official communication channel. Were we making money? How many members were there? Who are the owners? Is there a board, and if so, who is on it? Who is actually making decisions and how are those decisions being reached? Are our suggestions being seen and considered by these people?

Despite these issues, several interesting initiatives and projects came out of the community and makerspace: the Exposed ArtPrize Project, GR Young Makers, The Hot Spot, and most recently Jim Winter-Troutwine’s impressive sea kayak. I enjoyed the community, and wanted to see it continue to thrive.

I thought the communication problem was one of scale: there was a large community and only a few people running things. I assumed those in charge were simply overwhelmed by the work required to keep everyone informed. In an attempt to fix this problem, I volunteered to write a weekly newsletter which I hoped would act as a conduit for the leadership to inform those who were interested. I asked for a single piece of information when I started the newsletter: a list of board members and what their roles were. I did not receive this information, but went ahead anyways, thinking that it would be sorted out soon. I gathered interesting information by visiting the space and talking to the community at the Socials each week and put it into a digestible format, but still that simple piece of information was refused me. Each newsletter was approved by Samuel Bowles or Mark Van Holstyn before it was sent, sometimes resulting in a delay of days and occasionally resulting in articles being edited by them when they did not agree with what I had written.

Shortly after the first few editions of the newsletter, Casey and Mutually Human parted ways. My conversations with the people who formed that initial core of what became GR Makers revealed a much more systemic problem in the leadership than I had realized. There was indeed a board, made up of those people I talked to. They passed on concerns and advice from themselves and the members to the owners, but that’s all they were allowed to do. The board had no real power or influence, and it turns out that it had never had any. The decisions were being made by two people at MHS who held the purse strings, and even this advisory board was often kept in the dark about what was being decided.

This cauldron of problems finally boiled over and was made public at a town hall meeting on March 25, 2015. Over the course of a week, the advisory board and the owners held a series of private meetings and talked for hours to try to keep GR Makers together. Concessions and public apologies were made on both sides and an agreement was reached which seemed to satisfy nearly everyone. In short, it was promised that the leadership would give the board more powers and would become more transparent about finances, membership, and decision making. This link leads to my summary of that town hall meeting, and a nearly identical version of those notes went out in an approved edition of the newsletter.

The community was relieved that the makerspace we had worked so hard to create was not going to collapse, and I assumed that the board was being empowered. Bob Orchard was added to the advisory board and kept and published minutes from the board meetings – something which had not been done previously. These minutes always mentioned requests for the changes that had been agreed upon at the Town Hall, but action on those requests was always delayed. At the board meeting on April 29, the requests were finally officially denied. The minutes from that board meeting can be found here. Most of the board members – including all of the founders of that initial group in Casey’s garage – resigned as a result of this meeting.

It is up to each of us to decide if GR Makers as it exists today meets our desires and needs. There are still good people at GR Makers, but that initial group of interesting people has left. Without them I find very little reason to continue contributing. The ownership structure of GR Makers was an educational and enlightening experiment, but it is not what I want to be a part of. I think the openness and transparency that formed the backbone of that group which became GR Makers is gone, and I don’t think it is coming back. So it is with a heavy heart that I am resigning my membership.

But do not despair. That initial group of friends – that sociable collection of connectors, hackers, inventors, and makers – and a few new faces we’ve picked up along the way, have been talking together. We want to start over with a focus on the community and ideals that existed in the gatherings at Casey’s garage. It may be a while before we have a stable space to meet and tools for people to use, but I hope you’ll join us when we’re ready to try again. If you’d like to be kept up to date on this group, please fill out this short form.

by brousch at May 07, 2015 11:16 PM

April 10, 2015

Whitemice Consulting

yum-config-manager --setopt=

It is common knowledge that on CentOS/RHEL hosts you can enable and disable defined repositories with yum-config-manager. However, it is also possible to use the yum-config-manager tool to set any repository parameter using the --setopt option.

One common site-specific tweak is to exclude particular packages, by glob, from the default repositories. Perhaps you use the Samba packages from SetNET or the PostgreSQL packages from PGDG, in which case you do not want to risk an update co-mingling packages. Rather than diddling in vi with every repository definition in every repo file, it is possible to set the required option using yum-config-manager. Provide the configuration parameter you wish to set as the value of --setopt= and indicate that you want to save the change with --save. --setopt values are spelled out as repositoryName.parameter=value.

To exclude the glob "samba*" from all the default repositories:

yum-config-manager  --save --setopt=base.exclude=samba*;
yum-config-manager  --save --setopt=updates.exclude=samba*;
yum-config-manager  --save --setopt=extras.exclude=samba*;
yum-config-manager  --save --setopt=centosplus.exclude=samba*;
yum-config-manager  --save --setopt=contrib.exclude=samba*;

If the host has EPEL or other repositories those will need to be disabled as well.

yum-config-manager beats a whole bunch of file edits, and it makes it possible to script deployments of hosts which require repository customization.
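
As a sketch of that kind of scripting - the repository names are the stock CentOS ones used above; adjust for the repositories actually present on the host:

import subprocess

# Apply the same exclusion to each repository, mirroring the
# yum-config-manager invocations shown earlier.
for repo in ('base', 'updates', 'extras', 'centosplus', 'contrib'):
    subprocess.check_call([
        'yum-config-manager', '--save',
        '--setopt={0}.exclude=samba*'.format(repo),
    ])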

by whitemice at April 10, 2015 08:06 PM

April 09, 2015

Whitemice Consulting

dd REPORT!

dd is an underrated tool. What it lacks in user interface it makes up for with its swiss-army-knife range of purposes: from testing, to secure wiping, to backup and restore. But that user interface... it is bleak.

zcat bcmImage.gz | dd of=/dev/sdd bs=512

Text 1: Uncompress the image from the file bcmImage.gz and write it to the system's fourth disk device (/dev/sdd).

So dd is running, doing something, maybe. The only way you can really tell is to look at the hard drive lights or inspect some performance counters - somebody is using those drives. But dd has more user interface than it lets on: send it the USR1 signal and it will report its status. You can send as many USR1 signals as you like; it will report every time, and it does not interrupt the operation.

awilliam@linux-86wr:~> dd if=/dev/urandom of=/dev/null bs=1024

Text 2: Start a dd operation.

awilliam@linux-86wr:~> sudo killall -USR1 dd

Text 3: Send dd a USR1 signal; from another tty (obviously).

awilliam@linux-86wr:~> dd if=/dev/urandom of=/dev/null bs=1024
78883+0 records in
78882+0 records out
80775168 bytes (81 MB) copied, 6.262 s, 12.9 MB/s

Text 4: dd reports without interrupting what it is doing.

Handy.
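
The same poke works from a script. Below is a minimal Python sketch - the dd arguments are only placeholders - that starts a copy and asks for a progress report every few seconds:

import signal
import subprocess
import time

# Placeholder copy; substitute the real if=/of= arguments.
proc = subprocess.Popen(['dd', 'if=/dev/urandom', 'of=/dev/null', 'bs=1024', 'count=1000000'])
while proc.poll() is None:
    time.sleep(5)
    try:
        proc.send_signal(signal.SIGUSR1)  # dd prints its progress report to stderr
    except ProcessLookupError:
        break  # dd finished between the poll and the signal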

by whitemice at April 09, 2015 12:38 PM

March 31, 2015

Whitemice Consulting

Stream Peekaboo With Python

The Python standard library provides a reasonably adequate module for reading delimited data streams, and there are modules available for reading everything from XLS and DIF documents to MARC data. One deficiency of many of these modules is their inability to gracefully deal with whack data; in the real world data is never clean, never correctly structured, and you are lucky if it is accurate even on the rare occasion that it is correctly encoded.

For example, when Python's CSV reader meets a garbled line in a file it throws an exception and stops, and you're done. It does not report what record it could not parse; all you have is a traceback. Perhaps in the output you can look at the last record and guess that the error lies one record beyond that... maybe.

Fortunately most of these modules work with file-like objects. As long as the object they receive properly implements iteration they will work. Using this strength it is possible to implement a Peekaboo on the input stream which lets us see the unit of work currently being processed, or even pre-mangle that chunk.

Aside: The hardest part, at least for not-line-oriented data, is defining the unit of work.

For example here is a simple Peekaboo that allows for easy reporting of the line read by the CSV reader whenever that line does not contain the expected data:

import csv

class Peekaboo(object):

    def __init__(self, handle):
        self._h = handle
        self._h.seek(0)
        self._c = None

    def __iter__(self):
        for row in iter(self._h):
            self._c = row
            yield self._c

    @property
    def current(self):
        return self._c

class RecordFormatException(Exception):
    pass

def import_record(record):
    # verify record data: check field types, field count, etc...
    valid = len(record) == 3  # stand-in check; real validation goes here
    if not valid:
        raise RecordFormatException()

if __name__ == '__main__':

    rfile = open('testfile.csv', 'rb')
    peekabo = Peekaboo(rfile)
    for record in csv.reader(peekabo):
        try:
            data = import_record(record)
        except RecordFormatException as exc:
            print('Format Exception Processing Record:\n{0}'.format(peekabo.current, ))

Another use for a Peekaboo and the CSV reader is reading a delimited file that contains comments - lines starting with a hash ("#") are to be ignored when reading the file.

class Peekaboo(object):

    def __init__(self, handle, comment_prefix=None):
        self._h = handle
        self._h.seek(0)
        self._c = None
        self._comment_prefix = comment_prefix

    def __iter__(self):
        for row in iter(self._h):
            self._c = row
            if self._comment_prefix and self._c.startswith(self._comment_prefix):
                # skip the line
                continue
            yield self._c

    @property
    def current(self):
        return self._c

if __name__ == '__main__':

    rfile = open('testfile.csv', 'rb')
    peekabo = Peekaboo(rfile, comment_prefix="#")
    ...

The Peekaboo is nothing revolutionary; to experienced developers it is likely just obvious. But I've introduced it to enough Python developers to believe it worthy of a mention.

by whitemice at March 31, 2015 02:19 PM

March 25, 2015

Whitemice Consulting

LINUX CA Certificate Deployment

Creating an in-house signing [aka CA] certificate is a common practice; this allows you to generate free certificates for internal use. For Windows hosts, distributing this CA certificate to all the clients and relevant servers can be accomplished using Active Directory GPOs. Certificate management on LINUX hosts on the other hand has always been a swamp of tedium where the certificates often need to be configured into each client or service. Recent distributions have eased this process considerably by including a quasi-standardized set of scripts and certificate store locations. Both CentOS and openSUSE bundle this tool in an RPM named ca-certificates.

Assuming you are using one of these modern mainstream distributions, installing a CA is as easy as copying the PEM file to the appropriate directory and running the update-ca-trust script.

[root@myserver ~]# cp myCAcert.pem /usr/share/pki/ca-trust-source/anchors/
[root@myserver ~]# update-ca-trust

Aside: You should probably take a look at man update-ca-trust; it contains a lot of details.

Once the certificate is installed and the CA trust bundle has been updated you should be able to connect to SSL/TLS enabled sites without the dreaded "Verify return code: 19 (self signed certificate in certificate chain)" message.

Legacy Application Compatibility

One issue you will likely encounter with the modern certificate store is that older applications have a variety of places they look for certificates - these paths are sometimes hard-coded into the applications or set via obscure configuration directives. Fortunately update-ca-trust acknowledges this issue - you can enable legacy compatibility.

[root@myserver ~]# update-ca-trust enable

What this actually does is create symbolic links with the appropriate names in places like /etc/ssl/certs/ where legacy applications often expect to find certificate information. These links will point to the files maintained by the scripts which are part of the current certificate scheme.

[fred@myserver ~]# ls -l /etc/ssl/certs/
total 16
lrwxrwxrwx  1 root root   49 Mar 25 08:45 ca-bundle.crt -> /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
...

Legacy compatibility is not perfect; you will still have the odd application or service that needs to be pointed directly at the certificate files - there always seems to be something that is yet-even-more-legacy. However for most things it will just work, which we like.

Testing A Connection

Using the openssl command it is possible to TEST HTTPS, XMPP, SMTP, and IMAP encryption. This is vastly superior to attempting to test with a protocol-specific client - openssl will clearly report the error or success, which many clients will not reliably do. Note that in these examples for testing we do not specify a -CAfile or -CApath; that is the point - you should be able to establish secure connections without referencing your CA certificate files.

SMTP/TLS

openssl s_client -starttls smtp -connect smtp.example.com:25  -crlf
...
    Verify return code: 0 (ok)

IMAP/TLS

openssl s_client -starttls imap -connect imap.example.com:143  -crlf
...
    Verify return code: 0 (ok)

IMAP/SSL

openssl s_client -ssl3 -connect imap.example.com:993  -crlf
...
    Verify return code: 0 (ok)

XMPP/TLS

openssl s_client -starttls xmpp -connect xmpp.example.com:5222 -crlf 
...
     Verify return code: 0 (ok)

XMPP/SSL

openssl s_client -connect xmpp.example.com:5223
...
     Verify return code: 0 (ok)

HTTP/SSL

openssl s_client -connect www.example.com:443
...
    Verify return code: 0 (ok)
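
The same verification can be done from application code. For example, this minimal Python sketch (www.example.com is a placeholder host) uses the ssl module's default context, which should pick up the same system trust bundle that update-ca-trust maintains:

import socket
import ssl

context = ssl.create_default_context()  # loads the system CA trust store
with socket.create_connection(('www.example.com', 443)) as sock:
    with context.wrap_socket(sock, server_hostname='www.example.com') as tls:
        # An untrusted CA would make wrap_socket raise an ssl.SSLError here.
        print(tls.version())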

Postfix

Even with legacy compatibility enabled Postfix does not pick-up on the CA certificates. Nor does it readily indicate where it is looking for them. A work around is to simply set the service to use the legacy certificate bundle - do it once and you can forget about it.

Note: This example first disables, by commenting out, the existing CAFile configuration, if any. Then it sets the value of smtp_tls_CAfile. You should always configure Postfix using the postconf command; this is less error-prone than manually editing the configuration file - and it is easier to script.

[root@smtp.example.com ~]# postconf -# smtp_tls_CAfile
[root@smtp.example.com ~]# postconf -e smtp_tls_CAfile=/etc/ssl/certs/ca-bundle.crt
[root@smtp.example.com ~]# service postfix reload

Now, assuming you have smtp_use_tls = yes and the destination server is correctly configured, delivery of mail sent from the local host via Postfix will be correctly secured - and without filling your logs with "certificate verification failed for..." messages.

by whitemice at March 25, 2015 02:00 PM

March 23, 2015

Whitemice Consulting

Ghostscript Font Testing

Viewing fonts on a screen and printing fonts to a printer are two different paths. Printing on LINUX almost always involves Postscript - of which PDF is a subtype - and the primary engine for creating or modifying Postscript is the Ghostscript package. Given the display path and the print path it is important to verify that a custom or third-party font is correctly installed and is working in Ghostscript.

The tedious way to do this is to create a document that contains that font, print it, and see if it looks OK. And if it doesn't you scratch your head and wonder why; any font-related warnings or errors are lost in the background. A better way is to run a render test of the font using the Ghostscript CLI.

Aside: Postscript is a domain specific programming language. To create a document on paper the printer runs the Postscript program. With the Ghostscript CLI we can run the Postscript program interactively and see the result on a virtual page.

A Postscript program for font rendering is included in the standard Ghostscript package. The name of the program is prfont.ps. The first step is to locate this program in the filesystem.

awilliam@linux-86wr:~> rpm -ql ghostscript | grep prfont
/usr/share/ghostscript/9.15/lib/prfont.ps

Text 1: Looking in the ghostscript RPM for the prfont program.

Once you know the path to the Postscript program you can perform the font test.

  1. Start the Ghostscript CLI with the gs command. You should then be at a GS> prompt and have a virtual page window.
  2. Load all the system fonts with the loadallfonts command. You will see Ghostscript loading all the fonts. Watch the output for the font you are interested in [Ctrl-Shift-F in GNOME terminal will search the buffer]. See that it loaded, and if not look for any errors.
  3. Run the prfont program. In Ghostscript's CLI a Postscript program is run using the syntax "(path) run"
  4. Render the font you are interested in by calling the DoFont routine in the program. In the CLI, functions are called with parameters first and then the method name; so: "/nameoffont DoFont"
  5. If everything works the font will be rendered onto the virtual canvas and you will page through the document by pressing enter.

In practice it will look like:

awilliam@linux-86wr:~> gs
GPL Ghostscript 9.15 (2014-09-22)
Copyright (C) 2014 Artifex Software, Inc.  All rights reserved.
This software comes with NO WARRANTY: see the file PUBLIC for details.
GS>loadallfonts
Loading Utopia-BoldItalic font from /usr/share/ghostscript/fonts/putbi.pfa... 4214072 2766915 3914480 2630640 1 done.
Loading URWBookmanL-DemiBoldItal font from /usr/share/ghostscript/9.15/Resource/Font/URWBookmanL-DemiBoldItal... 4214072 2822861 3934672 2637347 1 done.
...
GS>(/usr/share/ghostscript/9.15/lib/prfont.ps) run
GS<1>/DroidSansGeorgian DoFont
>>showpage, press <return> to continue<<
>>showpage, press <return> to continue<<
>>showpage, press <return> to continue<<
GS>quit

by whitemice at March 23, 2015 03:14 PM

February 28, 2015

qwertyuiop ninja (Jamie Bliss)

A story about Jessica.

A story about Jessica.: swiftonsecurity: I want you to imagine someone for me. Her name is Jessica...

February 28, 2015 05:02 PM

February 21, 2015

qwertyuiop ninja (Jamie Bliss)

Leatherbound Pt2: Front-end Architecture and Data Schema

Ok, so last time I said there would be some designs in this post. That was a complete fabrication....

February 21, 2015 06:33 PM

January 23, 2015

qwertyuiop ninja (Jamie Bliss)

i3 Window Manager

I’ve been using the i3 Window Manager for a little while now on my laptop. For those of you...

January 23, 2015 12:12 AM

January 18, 2015

qwertyuiop ninja (Jamie Bliss)

Leatherbound Pt1: Project Requirements

I’m working on a new project which I thought I’d turn into a series. It’s nothing...

January 18, 2015 09:59 PM

January 14, 2015

Whitemice Consulting

Overrides With SSSD

LINUX has long been plagued with a rather lousy identity management scheme. Beyond the limitations of POSIX's getent and related calls [which can be very inefficient], the attempts to stub in network-aware identity services such as LDAP have only piled onto the rough edges. NSCD attempted to work around performance problems via caching - and did not do very well. Then came NSLCD, the next evolution of NSCD, which was better, but still inflexible. Identity management in more complex networks is a tedious business, and what administrators need more than anything else is flexibility.

Finally we have SSSD - The System Security Services Daemon - which is an identity service that seems to have an answer for every basic issue - from caching, to Kerberos integration, to joining together multiple domains.... On top of all that essential goodness is the simple feature of local overrides for identity values.

As an administrator at a multiple-platform enterprise this has always been a dilemma - what value should get stored in certain user attributes such as loginShell? Not all systems want/need the same thing. Ugly solutions involve symbolic links and/or installing non-standard shells, none of which can be described as "elegant" or even "correct". Our AIX systems want to use /bin/ksh, our LINUX systems want to use /bin/bash [or even /usr/bin/bash]. Ultimately I end up storing the attribute in the user object for the most limited and inflexible systems - generally that means AIX gets its way.

This means when I attempt to sign into an out-of-the-box LINUX installation it fails.

Jan 14 08:22:21 example sshd[24292]: User adam not allowed because shell /bin/ksh does not exist
Jan 14 08:22:21 example sshd[24293]: input_userauth_request: invalid user adam
Jan 14 08:22:29 example sshd[24292]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=192.168.1.257  user=adam

Bummer. But now that systems are using SSSD to back the NSS service the solution is as simple as:

[nss]
override_shell = /bin/bash

And now my shell - and the shell for every user on that host - is "/bin/bash".
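
A quick way to confirm the override took effect is to ask NSS itself; for example, this small Python check (using "adam", the user from the log above) resolves the account through the same NSS path that SSSD now backs:

import pwd

# pwd consults NSS, so with the [nss] override in place this prints /bin/bash
# even though the directory entry stores /bin/ksh.
print(pwd.getpwnam('adam').pw_shell)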

Aside: How often would users on the same host actually use different shells? That sounds like an administrator’s nightmare. At least on modern systems the loginShell seems much more appropriate as a host property than a user property; the closest most users ever get to a shell is the login script that starts their application or menu.

SSSD provides overrides for shell, home directory [override_homedir] and primary group [override_gid: make the primary group of all users from the network identity service the same].

Author: Adam Tauno Williams

by whitemice at January 14, 2015 05:00 AM

December 07, 2014

qwertyuiop ninja (Jamie Bliss)

Change in Job

I thought I would formally announce that I have changed companies. I’m now in a position of...

December 07, 2014 01:08 AM

December 02, 2014

Whitemice Consulting

Some Random xsltproc Tips

The xsltproc command allows the processing of an XML document with an XSLT template.

xsltproc rentalouts.xslt rentalouts.xml

Text 1: Perform the transform "rentalouts.xslt" on the document "rentalouts.xml".

A lesser known feature of xsltproc is the ability to pass parameters - these become XSLT variables - to the transformation.

xsltproc  --stringparam sales_id 15 DailyRentalOutNotice.xslt -

Text 2: Perform the "DailyRentalOutNotice.xslt" transform on the XML document read from standard input with a variable $sales_id having a value of "15".
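
If the same transform needs to happen inside a program rather than a shell pipeline, lxml (assuming it is installed) offers an equivalent of --stringparam; a minimal sketch with placeholder file names:

from lxml import etree

doc = etree.parse('rentalouts.xml')
transform = etree.XSLT(etree.parse('DailyRentalOutNotice.xslt'))
# strparam() quotes the value so it arrives in the stylesheet as the string $sales_id.
result = transform(doc, sales_id=etree.XSLT.strparam('15'))
print(str(result))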

Additional useful directives are --repeat (which runs the transform 20 times) and --timing (which outputs the run-time information of the transform). These directives allow for much easier and more accurate performance testing/tuning of a transformation than running xsltproc under time, for example.

If you've ever been curious what XSLT extensions your xsltproc supports [this may vary by build] you can dump the known extensions using the --dumpextensions parameter.

$> xsltproc --dumpextensions
Registered XSLT Extensions
--------------------------
Registered Extension Functions:
{http://exslt.org/crypto}md5
{http://exslt.org/math}random
{http://exslt.org/math}abs
...

Text 3: xsltproc reporting its supported extensions.

author: Adam Tauno Williams

by whitemice at December 02, 2014 03:31 PM

November 21, 2014

Whitemice Consulting

Paramiko's SFTPFile.truncate()

Paramiko is the go-to module for utilizing SSH/SFTP in Python. One of the best features of Paramiko is being able to SFTPClient.open() a remote file and simply use it like you would a local file. SFTPClient's open() returns an SFTPFile, which is a file-like object that theoretically implements the same behavior as Python's native file object.

But the catch here is file-like. It is like a file, except when it is not like a file.

The non-likeness I encountered concerns SFTPFile's truncate method. The native Python file object has a truncate method where the size is optional; if not specified it defaults to the current offset of the file. However SFTPFile's truncate method requires the size to be specified. Failure to specify a size value results in a TypeError exception concerning an incorrect number of parameters.
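
A minimal sketch of the difference (host name, credentials, and file name are placeholders):

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('sftp.example.com', username='fred', password='secret')
sftp = client.open_sftp()

handle = sftp.open('remote.txt', 'r+')
handle.seek(100)
# A native file object would accept handle.truncate() here and use the current
# offset; SFTPFile raises a TypeError, so pass the offset explicitly instead.
handle.truncate(handle.tell())
handle.close()

sftp.close()
client.close()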

by whitemice at November 21, 2014 02:49 PM

November 11, 2014

qwertyuiop ninja (Jamie Bliss)

Help the GNOME Foundation defend the GNOME trademark

Help the GNOME Foundation defend the GNOME trademark: I know I haven’t been posting much, but...

November 11, 2014 03:03 PM

November 03, 2014

qwertyuiop ninja (Jamie Bliss)

Patent Searches

Allow me to summarize my experience with patent searches: Try to come up with as many ways to...

November 03, 2014 08:07 PM

October 31, 2014

qwertyuiop ninja (Jamie Bliss)

How To Keep Your Best Programmers | DaedTech

How To Keep Your Best Programmers | DaedTech: To future employers: If you like me and want to keep...

October 31, 2014 05:55 PM

October 28, 2014

qwertyuiop ninja (Jamie Bliss)

The Couch Cone of Silence

The Couch Cone of Silence: About five years ago, I bought a cushy couch for my office. (Okay, yes, I...

October 28, 2014 04:03 PM

October 21, 2014

qwertyuiop ninja (Jamie Bliss)

How to make a living photo?

I have an idea for how to process the raw image: Use the calibration information in each photo to...

October 21, 2014 04:03 PM

October 20, 2014

qwertyuiop ninja (Jamie Bliss)

Metaclasses Talk Tonight

Tonight, I shall be giving a talk at the local Python Users Group on metaclasses and related. If...

October 20, 2014 09:00 PM

October 18, 2014

qwertyuiop ninja (Jamie Bliss)

Extracting the "bug-eye" image from a Lytro file

So I have the raw database from the Mac version of the Lytro software. Each “image” is a...

October 18, 2014 05:31 PM

October 15, 2014

qwertyuiop ninja (Jamie Bliss)

Lytro Software

I’d like to write my own photo manager for my Lytro camera. My basic problem is Linux...

October 15, 2014 02:58 AM