Never put important data on anyone else’s hardware. Ever.

Friday, January 22, 2021 

In early January 2021, two internet services provided unintentional and unequivocal demonstrations of the intrinsic trade-offs between running one’s own hardware and trusting “The Cloud.”  Parler and Gab, two “social network” services competing for the white supremacist demographic, both came under fire in the wake of a violent insurrection against the US government after the plotters used their platforms (among other, less explicitly extremist-friendly services) to organize the attack.

Parler had elected to take the expeditious route of deploying their service on AWS and discovered just how apt the atmospheric metaphor—public and ephemeral—really is when first their entire data set was extracted and then their services were unilaterally terminated by AWS, knocking them completely offline (except, of course, for the exfiltrated data, which is still online and being combed by law enforcement for evidence of sedition).

Gab owns their own servers, and while they had trouble with their domain registrar, such problems are relatively easy to resolve: Gab remains online.  Gab did face the challenge of rapid scaling as the entire right-wing extremist market searched for a safe haven, away from the fragile Parler and from the timid and begrudging regulation of hate speech and calls for immediate violence by mainstream social networks in the fallout over their contributions to the insurrection and other acts of right-wing terrorism.

In general, customers who engage cloud service providers rather than self-hosting do so to speed deployment, take advantage of easy scalability (up or down), and offload management of common-denominator infrastructure to a large-scale provider: all superficially compelling arguments.  However convenient this may seem, it is rarely a good decision, and it fails to rationally weigh some intrinsic shortcomings, as Parler discovered in rather dramatic fashion: loss of legal ownership of the data on those services, complete abdication of control of that data and service, and an intrinsic and inescapable misalignment of business interests between supplier and customer.

Anyone considering engaging a cloud service provider for a service that results in proprietary data being stored on third-party hardware, or in a business-critical service being provided by a third party, should ensure that contractual obligations with well-defined penalties explicitly match the implicit expectations of privacy, stewardship, suitability of service, and continuity, and that failures are made actionable sufficiently to make the client whole in the event of a material breach.

Below is a list of questions I would have for any cloud provider of any critical service.  In general, if a provider is even willing to consider answering them, the results will be shockingly unsatisfactory.  Every company that uses a cloud service, whether it is hosting on AWS or email provisioning by Google or Microsoft, is a Parler waiting to happen: all of your data exposed and then your business terminated.  Cloud services are acceptable only for insecure data and for services that are a convenience, not a core requirement.

Like clouds in the sky, The Cloud is public and ephemeral.


A: A first consideration is data protection and privacy:

What liability does The Company, and employees of The Company individually, have should they sell or lose control of The Customer’s data?   What compensation will The Customer receive if control of The Customer’s data is lost?  Please clarify The Company’s criminal and civil liability under the following scenarios:

1) A third party exfiltrates The Customer’s data entrusted to The Company’s care in an unauthorized manner.

2) An employee of The Company willfully misuses The Customer’s data entrusted to The Company in any way.

3) The Company disposes of equipment in a manner which makes The Customer’s data entrusted to The Company accessible to third parties.

4) The Company receives a National Security Letter (NSL) requesting information pertaining to The Customer or to others who have data about The Customer on The Company’s service.

5) The Company receives a warrant requesting information pertaining to The Customer or to others who have data regarding The Customer on The Company’s service.

6) The Company receives a subpoena requesting information pertaining to The Customer or to others who have data regarding The Customer on The Company’s service that is opened or has been stored on their hardware for more than 180 days.

7) The Company receives a civil discovery request for information pertaining to The Customer or to others who have data regarding The Customer on The Company’s service.

8) The Company sells or provides access to The Customer’s data or meta information about The Customer or The Customer’s use of The Company’s system to a third party.

9) The Company changes their terms of service at some future date in a way that is inconsistent with the terms agreed to at the time of The Customer’s engagement of the services of The Company.

10) The Company fails to inform The Customer of a breach of control of The Customer’s data.

11) The Company fails to inform The Customer in a timely manner of a change in policy regarding third party access to The Customer’s data.

12) The Company erroneously exposes The Customer’s data to third party access due to negligence or incompetence.

B: A second consideration is the serial dependency of The Customer’s activity on the reliability of The Company’s service:

By relying on The Company’s service, The Customer typically will rely on the performance and availability of The Company’s products.  If The Company’s product fails or fails to provide service as expected, The Customer may incur losses, including direct financial losses, loss of reputation, loss of convenience, or other harms.  What warranty does The Company make in the performance of their services?  What recourse does The Customer have for recovery of losses should The Company fail to perform?

Please provide details on what compensation The Company will provide in the following scenarios:

1) The Company can no longer perform the agreed and expected services due to reasons beyond The Company’s control.

2) The Company’s service fails to meet expectations in a way that causes a material loss to The Customer.

3) The Company suffers an extended outage or compromise of service that exceeds a reasonable or agreed maximum accepted duration.

C: A third consideration is the alignment of interests between The Customer and The Company, which may not be complete and may diverge in the future:

Engagement of the services of The Company requires an investment of time and resources on the part of The Customer, in excess of any fees The Company may charge, to adopt The Company’s products and services.  What compensation will be provided should The Company’s products fail to meet performance and utility expectations?  What compensation will be provided should The Customer be required to expend resources to compensate for The Company’s failure to meet service expectations?

Please provide details on what compensation The Company will provide in the following scenarios:

1) The Company elects to no longer perform the agreed and expected services due to business decisions made by The Company.

2) Ownership or control of The Company changes to an entity that is not aligned with the values of The Customer and which The Customer cannot support, directly or indirectly.

3) Control of The Company passes to a third party, e.g. through an acquisition or a change of control of the board, resulting in use of The Customer’s data in a way that is unacceptable to The Customer.

4) The Company or employees of The Company are found to have engaged in behavior, speech, or conduct which is unacceptable to The Customer.

5) The Company’s products or services are found to be unacceptable to The Customer for any reason, including but not limited to security flaws, missing features, access failures, or lack of performance, and The Company is unable or unwilling to meet The Customer’s requirements in a timely manner.

If your company depends on third party provisioning of IT services, you’re just one viral tweet away from being out of business.  Build an IT department that knows how to use a command line and run your critical services on your own hardware.

Posted at 16:01:48 GMT-0700

Category: FreeBSD, Linux, Security

EZ rsync cheat sheet

Wednesday, December 30, 2020 

Rsync is a great tool – incredibly powerful for synchronizing directories, copying over a network or over SSH, an awesome way to securely back up a mobile device to a core network, and other great functions.  It works better than just about anything else developed before or since, but its command line UI is easy to forget if you don’t use it for a while, and Windows is a challenge.

This isn’t meant to be a comprehensive guide (there are lots of those), but a quick summary of what I find useful.

There’s one confusing thing that I have to check often to be sure it is going to do what I think it should – the trailing slash on the source.  It works like this: a trailing slash on the source means “copy the contents of this directory,” while no trailing slash means “copy the directory itself into the destination” (a trailing slash on the destination makes no difference).
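For example, assuming src contains file1 and file2:

rsync -av src/ dest   # copies the contents: dest/file1, dest/file2
rsync -av src dest    # copies the directory: dest/src/file1, dest/src/file2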

A quick summary of useful command options (there are many, many) is:

-v, --verbose               increase verbosity
-r, --recursive             recursive (go into subdirectories)
-c, --checksum              skip based on checksum, not mod-time & size (slow, but accurate)
-a, --archive               archive mode; equals -rlptgoD (no -H,-A,-X)
-z, --compress              compress file data during the transfer, should help over slow links
-n, --dry-run               trial run, don't move anything
-h, --human-readable        display the output numbers in a human-readable format
-u, --update                don’t copy the files from source to destination if destination files are newer
    --progress              show the sync progress during transfer
    --exclude ".*"          exclude files starting with "."
    --remove-source-files   delete source files after transfer (like mv/merge)
    --delete                any files in dest that aren't in source are deleted in destination (danger)

I do not recommend using compression (-z) on a LAN, it’ll probably slow you down. Over a slower (typically) WAN link it helps, but YMMV depending on link and CPU speed.

If the files really have to be accurately transferred, the checksum (-c) option is critical – every copy function should include this.
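For reference, a couple of complete invocations of the sort I find myself reusing; the hostnames and paths are placeholders, adjust to taste:

# mirror a local directory to a remote host over SSH, with progress
rsync -avh --progress /data/photos/ user@backup.host:/backup/photos/

# dry run: preview what would change, including deletions, without copying anything
rsync -avhn --delete /data/photos/ user@backup.host:/backup/photos/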

Posted at 11:53:28 GMT-0700

Category: FreeBSD, Linux, Positive, reviews

Favicon generation script

Monday, December 21, 2020 

Favicons are a useful (and fun) part of the browsing experience.  They once were simple – just an .ico file of the right size in the root directory.  Then things got weird and computing stopped assuming an approximate standard ppi for displays, starting with mobile and “retina” displays.  The obvious answer would be .svg favicons but, wouldn’t’ya know, Apple doesn’t support them (neither does Firefox mobile), so for a few more iterations it still makes sense to generate an array of sizes with code to select the right one.  This little tool pretty much automates that from a starting .svg file.

There are plenty of good favicon scripts and tools on the interwebs.  I was playing around with .svg sources for favicons and found it a bit tedious to generate the sizes considered important for current (2020-ish) browsing happiness.  I found a good start at Stack Exchange by @gary, though the sizes weren’t the currently recommended set (per this GitHub project).  Your needs may vary, but it is easy enough to edit.

The script relies on the following wonderful FOSS tools: Inkscape (to rasterize the .svg at each size), pngquant (to optimize the resulting .png files), and ImageMagick’s convert (to assemble the multi-layer favicon.ico).

These are available in most distros (software manager had them in Mint 19).

Note that my version leaves the format as .png – the optimized png will be many times smaller than the .ico format, and png works for everything except IE<11, which nobody should be using anyway.  The generated favicon.ico contains 16, 32, and 48 pixel versions in 3 different layers, built from the 512×512 pixel version.

The code below can be saved as a bash file; set the execution bit, call it as ./favicon file.svg, and off you go:

#!/bin/bash

# echo each command as it runs and stop on any error
set -ex

# collect the file name you entered on the command line (file.svg)
svg=$1

# set the sizes to be generated (plus 310x150 for msft)
size=(16 32 70 128 150 152 167 180 192 310 512)

# set the write directory as a favicon directory below current
out="$(pwd)/favicon"
mkdir -p "$out"

echo "Making bitmaps from your svg..."

for i in "${size[@]}"; do
    inkscape "$svg" --export-png="$out/favicon-$i.png" -w$i -h$i --without-gui
done

# Microsoft wide icon (annoying, probably going away)
inkscape "$svg" --export-png="$out/favicon-310x150.png" -w310 -h150 --without-gui

echo "Compressing..."

for f in "$out"/*.png; do pngquant -f --ext .png "$f" --posterize 4 --speed 1; done

echo "Creating favicon.ico..."

convert "$out/favicon-512.png" -define icon:auto-resize=48,32,16 "$out/favicon.ico"

echo "Done"

Copy the .png files generated above as well as the original .svg file into your root directory (or, if using a sub-directory, add the path below), editing the “color” of the Safari pinned tab mask icon to suit your site.  You might also want to make a monochrome version of the .svg file and reference that as the “mask-icon” instead; it will probably look better, but that’s more work.

The following goes inside the <head> element of your index.html to load the correct sizes as needed (delete the lines for Microsoft’s browserconfig.xml file and/or Android’s manifest file if not needed):

<!-- basic svg -->
<link rel="icon" type="image/svg+xml" href="/favicon.svg">

<!-- generics -->
<link rel="icon" href="favicon-16.png" sizes="16x16">
<link rel="icon" href="favicon-32.png" sizes="32x32">
<link rel="icon" href="favicon-128.png" sizes="128x128">
<link rel="icon" href="favicon-192.png" sizes="192x192">

<!-- Android -->
<link rel="shortcut icon" href="favicon-192.png" sizes="192x192">
<link rel="manifest" href="manifest.json" />

<!-- iOS -->
<link rel="apple-touch-icon" href="favicon-152.png" sizes="152x152">
<link rel="apple-touch-icon" href="favicon-167.png" sizes="167x167">
<link rel="apple-touch-icon" href="favicon-180.png" sizes="180x180">
<link rel="mask-icon" href="/favicon.svg" color="brown">

<!-- Windows --> 
<meta name="msapplication-config" content="/browserconfig.xml" />

For WordPress integration, you don’t have access to a standard index.html file, and there are crazy redirects happening, so you need to append the code snippet below to your theme’s functions.php file, wrapped around the icon declaration block above (optimally in your child theme, unless you’re a theme developer, since it’ll otherwise get overwritten on update):

/* Allows browsers to find favicons */
add_action('wp_head', 'add_favicon');
function add_favicon(){
?>
REPLACE THIS LINE WITH THE BLOCK ABOVE
<?php
}

Then, just for Windows 8 & 10, there’s an XML file to add to your directory (root by default in this example), which has to be named “browserconfig.xml”.  Also note you need to select a tile color for your site:

<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
    <msapplication>
        <tile>
            <square70x70logo src="/favicon-70.png"/>
            <square150x150logo src="/favicon-150.png"/>
            <wide310x150logo src="/favicon-310x150.png"/>
            <square310x310logo src="/favicon-310.png"/>
            <TileColor>#ff8d22</TileColor>
        </tile>
    </msapplication>
</browserconfig>

There’s one more file that’s helpful for mobile compatibility, the Android save-to-desktop file, “manifest.json”.  This requires editing and can’t be pure copy pasta: fill in the blanks and select your colors.

{
    "name": "",
    "short_name": "",
    "description": "",
    "start_url": "/?homescreen=1",
    "icons": [
        {
            "src": "/favicon-192.png",
            "sizes": "192x192",
            "type": "image/png"
        },
        {
            "src": "/favicon-512.png",
            "sizes": "512x512",
            "type": "image/png"
        }
    ],
    "theme_color": "#ffffff",
    "background_color": "#ff8d22",
    "display": "standalone"
}

Check the icons with this favicon tester (or any other).

Manifest validation: https://manifest-validator.appspot.com/

Posted at 17:26:44 GMT-0700

Category: HowTo, Linux, self-publishing

CoreELEC/Kodi on a Cheap S905x Box

Friday, September 11, 2020 

I got a cheap Android TV box off Amazon a while back, consolidated devices, and it ended up being spare.  One of the problems with these TV boxes is that they don’t get OS updates after release, and soon video apps stop working because of annoying DRM mods.  I figured I’d try to switch one to Linux and see how it went.  There are some complications: these are ARM devices (not x86/x64), and getting ADB to connect is slightly complicated because it isn’t a USB device.  There are a few variations of the “ELEC” (Embedded Linux Entertainment Center) family: OpenELEC, LibreELEC, and CoreELEC (at least).  CoreELEC focuses on ARM boxes and makes sense for the cheapest ones.

What you get is a little Linux-based device that boots to Kodi, runs an SMB server, and provides SSH access – basically an open source SmartTV that doesn’t spy on you.  It is well suited to a media library you own, rather than rent, such as your DVD/Blu-Ray collection.  Perhaps physical media will come back into vogue as the rent-to-watch streaming world fractures into a million individual subscriptions.  Once moved to the FOSS world, there are regular updates and, generally, people committed to keeping things working – far better and far longer-term support than OEMs offer for random cheap Android devices (or even quite expensive ones).

A: Download the CoreELEC image:

You need to know the processor and RAM of your device – note that a lot of different brands are the same physical boxes.  My “CoCoq M9C Pro” (not listed) is also sold as a “Bqeel M9C Pro” (listed) and the SOC (S905X) and RAM (1G) match, so the Bqeel image worked fine.  The download helper at the bottom of the https://coreelec.org/ site got me the right file easily.

B: Image a uSD or USB device for your wee box:

I’m running Linux as my desktop so I used the LibreELEC Creator Tool and just selected the file downloaded previously.  That part went easily.  The only issue was that after imaging, Linux didn’t recognize the device and I had to systemctl --user restart gvfs-udisks2-volume-monitor before I could do the next step.

C: One manual file move for bootability:

ELEC systems use a “device tree” – a file of device-specific hardware descriptors that gets (most of) the random cheap peripherals working on these cheap boxes.  You have to find the right one in the Device Trees folder on your newly formatted uSD/USB device and copy it to the root directory, renamed dtb.img.
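A sketch of that copy, assuming the media mounted at /media/$USER/COREELEC and that gxl_p212_1g.dtb is the right tree for an S905X/1G box (both names are assumptions – check your mount point and your SOC/RAM variant):

cp "/media/$USER/COREELEC/device_trees/gxl_p212_1g.dtb" "/media/$USER/COREELEC/dtb.img"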

D: Awesome, now what?

Once you have your configured, formatted, bootable uSD/USB, you have to get your box to boot off it.  This is a bit harder without the power/volume control buttons normally found on Android devices.  I have ADB set up to strip bloatware from phones.  You just need one command to get this to work, adb reboot update, but getting the command to your box is a bit of a trick.

Most of these boxes are supplied rooted, so you don’t need to do that, but you do need a command line tool and to enable ADB.

To enable ADB: navigate to the build number (usually Settings > System > About phone > Build number) and click it a bunch of times until it says “you’re a developer!”  Then go to developer options and enable ADB.  You can’t use it yet though because there’s no USB.

Install a Terminal Emulator for Android and enable ADB over TCP:

su
setprop service.adb.tcp.port 5555
stop adbd
start adbd

Then check your IP with ifconfig and on your desktop computer run

adb connect dev.ip.add.ress:5555

(dev.ip.add.ress is the IP address of your device – like 192.168.100.46, but yours will be different)

Then execute adb reboot update from the desktop computer and the box should reboot then come up in CoreELEC.

Going back to Android is as easy as disconnecting the boot media and rebooting; the Android OS is still on the device.

Issue: WIFI support.

Many of these cheap devices use the cheapest WiFi chips they can, and often the MFGs won’t share details with FOSS developers, so driver support is weak.  Bummer, but the boxes have wired connections and wired works infinitely better anyway: it isn’t a phone, it’s attached to a TV that’s not moving around; run the damn wire and get a stable, reliable connection.  If that’s not possible, check the WiFi chips before buying or get a decent USB WiFi adapter that is supported.

Posted at 16:46:48 GMT-0700

Category: HowTo, Linux, technology

WebP and SVG

Tuesday, September 1, 2020 

Using WebP coded images inside SVG containers works.  I haven’t found any automatic way to do it, but it is easy enough manually and results in very efficiently coded images that work well on the internets.  The manual process is to Base64 encode the WebP image and then open the .svg file in a text editor and replace the

xlink:href="data:image/png;base64, ..."

with

xlink:href="data:image/webp;base64,..."

(“…” means the appropriate data, obviously).


Back in about 2010 Google released the spec for WebP, an image compression format that provides roughly 2-4x the coding efficiency of the venerable JPEG (vintage 1974), derived from the VP8 CODEC they bought from On2.  VP8 is a contemporary of, and technical equivalent to, H.264 and was developed during a rush of innovation to replace the aging MPEG-II standard that also produced Theora and Dirac.  Both the VP8 and H.264 CODECs are encumbered by patents, but Google granted an irrevocable license to all of VP8’s patents, making it “open,” while H.264’s patents compel licensing from MPEG-LA.  One would think this would tend to make VP8 (and the WebM container) a global standard, but Apple refused to give Google the win and there’s still no native support in Apple products.

A small aside on video and still coding techniques.

All modern “lossy” CODECs (throwing some data away, like .mp3, as opposed to “lossless,” meaning the original can be reconstructed exactly, as in .flac) are founded on either Discrete Cosine Transform (DCT) or Wavelet (DWT) encoding of “blocks” of image data.  There are far more detailed write-ups online that explain the process in detail, but the basic concept is to divide an image into small tiles of data, apply a mathematical function that converts that data into a form which sorts the information from least human-perceptible to most human-perceptible, and set some threshold for throwing away the least important data while keeping the bits that are most important to human perception.

Wavelets are promising, but never really took off, as in JPEG2000 and Dirac (which was developed by the BBC).  It is a fairly safe bet that any video or still image you see is DCT coded thanks to Nasir Ahmed, T. Natarajan and K. R. Rao.  The differences between 1993’s MPEG-1 and 2013’s H.265 are largely around how the data that is perceptually important is encoded in each still (intra-frame coding) and some very important innovations in inter-frame coding that aren’t relevant to still images.

It is the application of these clever intra-frame perceptual data compression techniques that is most relevant to the coding efficiency difference between JPEG and WebP.

Back to the good stuff…

Way back in 2010 Google experimented with the VP8 intra-coding techniques to develop WebP, a still image CODEC that had to have two core features:

  • better coding efficiency than JPEG,
  • ability to handle transparency like .png or .tiff.

This could be the one standard image coding technique to rule them all – from icons to gigapixel images, all the necessary features and many times better coding efficiency than the rest.  Who wouldn’t love that?

Apple.

Of course it was Apple.  Can’t let Google have the win.  But, finally, with Safari 14 (June 22, 2020 – a decade late!) iOS users can finally see WebP images and websites don’t need crazy auto-detect 1974 tech substitution tricks.  Good job Apple!

It may not be a coincidence that Apple has just released their own still image format, .heif, based on the intra-frame coding magic of H.265; maybe they thought it might be a good idea to suddenly pretend to be an open player rather than a walled-garden-screw-you, lest iOS insta-users wall themselves off from the 90% of the world that isn’t willing to pay double to pose with a fashionable icon in their hands.  Not surprisingly, .heic, based on H.265 developments, is meaningfully more efficient than WebP, based on VP8/H.264-era techniques, but as it took WebP 10 years to become a usable web standard, I wouldn’t count on .heic having universal support soon.

Oh well.  In the meantime, VP8 gave way to VP9 and then VP10, which has become AV1, arguably a generation ahead of HEVC/H.265.  There’s no hardware decode yet (as of the end of 2020), but all the big players are behind it, so I expect 2021 devices will support it and GPU decode will come in 2021.  By then, expect VVC (H.266) to be replacing HEVC (H.265) with a ~35% coding efficiency improvement.

Along with AV1’s intra/inter-frame coding advances, the intra-frame techniques are useful for a still format called AVIF: basically, AVIF is to AV1 (“VP11”) what WebP is to VP8 and HEIF is to HEVC.  So far (Dec 2020) only Chrome and Opera support AVIF images.

Then, of course, there’s JPEG XL on the way.  For now, the most broadly supported post-JPEG image codec is WEBP.

SVG support in browsers is a much older thing – Apple embraced it early (SVG was not developed by Google so….) and basically everything but IE has full support (IE…  the tool you use to download a real browser).  So if we have SVG and WebP, why not both together?

Oddly I can’t find support for this in any of the tools I use, but as noted at the open, it is pretty easy.  The workflow I use is to:

  • generate a graphic in GIMP or Photoshop or whatever and save as .png or .jpg as appropriate to the image content with little compression (high image quality)
  • Combine that with graphics in Inkscape.
  • If the graphics include type, convert the type to SVG paths to avoid font availability problems or having to download a font file before rendering the text or having it render randomly.
  • Save the file (as .svg, the native format of Inkscape)
  • Convert the image file to WebP with a reasonable tool like Nomacs or IrfanView.
  • Base64 encode the image file, either at the command line with base64 infile.webp > outfile.webp.b64 (see the sketch after this list) or with this convenient site
  • If you use the command line option, the prefix to the data is “data:image/webp;base64,”
  • Replace the … in the appropriate xlink:href="…" with the new data using a text editor like Atom.
  • Drop the file on a browser page to see if it works.
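A minimal sketch of the command line encode step (base64 -w0 is the GNU coreutils flag that suppresses line wrapping so the data URI stays on one line):

base64 -w0 image.webp > image.webp.b64
printf 'data:image/webp;base64,' | cat - image.webp.b64 > datauri.txt

Then paste the contents of datauri.txt in as the xlink:href value.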

WordPress blocks .svg uploads without a plugin, so you need one.

The picture is 101.9kB and tolerates zoom quite well (give it a try: click and zoom on the image).

Posted at 08:54:16 GMT-0700

Category: HowTo, Linux, photo, self-publishing, technology

Dealing with Apple Branded HEIF .HEIC files on Linux

Saturday, August 22, 2020 

Some of the coding tricks in H.265 have been incorporated into MPEG-H coding, an ISO standard introduced in 2017, which yields roughly a 2:1 coding efficiency gain over the venerable JPEG, introduced in 1992.  Remember that?  I do; I’m old.  I remember having a hardware NuBus JPEG decoder card.  One of the reasons JPEG has lasted so long is that images have become a small storage burden (compared to 4K video, say) and that changing format standards is extremely annoying to everyone.

Apple has elected to make every rational person’s life difficult, put a little barbed wire around their high-fashion walled garden, and do something a little special with their brand of an HEVC (H.265) profile for images.  Now normally seeing iOS users’ insta images of how fashionable they are isn’t really worth the effort, but now and then a useful correspondent joins the cult, forks over a ton of money to show off a logo, and starts sending you stuff in their special proprietary format.  Annoying, but fixable.

Assuming you’re using an OS that is neither primarily spyware nor fashion forward, such as Linux Mint, you can install HEIF decode (including Apple Brand HEIC) with a few simple commands:

$ sudo add-apt-repository ppa:jakar/qt-heif
$ sudo apt update
$ sudo apt install qt-heif-image-plugin

Once installed, various image viewers should be able to decode the images.  I rather like Nomacs as a fairly tolerable replacement for Irfan Skiljan’s still awesome IrfanView.
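If you need to convert the images rather than just view them, the libheif example tools should do it; assuming the package is named libheif-examples on Mint/Ubuntu:

sudo apt install libheif-examples
heif-convert fashionable.heic fashionable.jpg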

Posted at 03:56:36 GMT-0700

Category: HowTo, Linux, photo, Positive, reviews, technology

Integrate Fail2Ban with pfSense

Monday, July 13, 2020 

Fail2Ban is a very nice little log monitoring tool that is used to detect cracking attempts on servers, extract the malicious IPs, and do things to them – usually temporarily adding the IP address of the source of badness to the server’s firewall “drop” list so that IP’s bad packets are lost in the aether.  This is great, but instead of running a firewall on every server, each locally detecting and blocking malicious actors, it would be even better to detect across all services and servers on the LAN and push the results up to a central firewall so the bad IPs can’t reach the network at all.  This is one method to achieve that goal.

I like pfSense as a firewall and run FreeBSD on my servers; I couldn’t find a prebuilt tool to integrate F2B with pfSense, but it wasn’t hard to hack something together that works.  Basically, I have F2B maintain a local “block list” of bad IPs as a simple text file, which is published via Apache; pfSense grabs it from there and applies it as a LAN-wide IP filter.  I use the pfSense package pfBlockerNG to set up the tables, but in the end a custom script running on the pfSense server actually grabs the file and updates the pfSense block lists from it on a 1 minute cron job.

There are plenty of well-written guides for getting F2B working and how to configure it for jails; I found the following useful:

The custom bits I did to get it to work are:

Custom F2B Action

On the protected side, I modified the “dummy.conf” script to maintain a list of bad IPs in an Apache-served location that pfSense can reach.  F2B manages that list, putting bad IPs in “jail” and letting them out as in any normal F2B installation – but instead of feeding the local server’s packet filter, it feeds a web-published text list.

# Fail2Ban configuration file
#
# Author: David Gessel
# Based on: dummy.conf by Cyril Jaquier
#

[Definition]

# Option:  actionstart
# Notes.:  command executed on demand at the first ban (or at the start of Fail2Ban if actionstart_on_demand is set to false).
# Values:  CMD
#


actionstart = if [ -z '<target>' ]; then
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
              chmod 755 <target>
              echo "%(debug)s started"

# Option:  actionflush
# Notes.:  command executed once to flush (clear) all IPS, by shutdown (resp. by stop of the jail or this action)
# Values:  CMD
#

actionflush = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
              chmod 755 <target>
              echo "%(debug)s clear all"

# Option:  actionstop
# Notes.:  command executed at the stop of jail (or at the end of Fail2Ban)
# Values:  CMD
#
actionstop = if [ ! -z '<target>' ]; then
                  rm -f <target>
                  touch <target>
                  printf %%b "# <init>\n" <to_target>
                  fi
             chmod 755 <target>
             echo "%(debug)s stopped"

# Option:  actioncheck
# Notes.:  command executed once before each actionban command
# Values:  CMD
#
actioncheck =

# Option:  actionban
# Notes.:  command executed when banning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

actionban = printf %%b "<ip>\n" <to_target>
            sed -i '' '/^$/d' <target>
            sort -u <target> -o <target>
            chmod 755 <target>
            echo "%(debug)s banned <ip> (family: <family>)"

# Option:  actionunban
# Notes.:  command executed when unbanning an IP. Take care that the
#          command is executed with Fail2Ban user rights.
# Tags:    See jail.conf(5) man page
# Values:  CMD
#

# flush the IP using grep which is supposed to be about 15x faster than sed
# grep -v "pattern" filename > filename2; mv filename2 filename

actionunban = grep -v "<ip>" <target> > <temp>
              mv <temp> <target>
              chmod 755 <target>
              echo "%(debug)s unbanned <ip> (family: <family>)"

debug = [<name>] <actname> <ip> --

[Init]

init = BRT-DNSBL

target = /usr/jails/claudel/usr/local/www/data-dist/brt/dnsbl/brtdnsbl.txt
temp = <target>.tmp
to_target = >> <target>
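For context, a jail then invokes this action like any other.  A hypothetical jail.local entry might look like the following – the action name assumes the file above was saved as brtdnsbl.conf in action.d, and the filter and logpath are illustrative, not my exact config:

[sshd]
enabled = true
filter  = sshd
logpath = /var/log/auth.log
action  = brtdnsbl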

Once this list is working, then move to the pfSense side.

Set up pfBlockerNG

The basic setup of pfBlockerNG is well described, for example in https://protectli.com/kb/how-to-setup-pfblockerng/ and it provides a lot of useful blocking options, particularly with externally maintained lists of internationally recognized bad actors.  There are two basic functions, related but different:

DNSBL

Domain Name Service Block Lists are lists of domains associated with unwanted activity; blocking them at the DNS server level (via Unbound) makes it hard for application-level services to reach them.  A great use of DNSBLs is to block all of Microsoft’s telemetry sites, which makes it much harder for Microsoft to steal all your files and data (which they do by default on every “free” Windows 10 install, including actually copying your personal files to their servers without telling you!  Seriously.  That’s pretty much the definition of spyware.)

It also works for non-corporate-sponsored spyware, for example lists of command and control servers found for botnets or ransomware.  This can help prevent such attacks by denying trojans and viruses access to their instruction servers.  It can also easily help identify infected computers on the LAN, as any blocked requests are logged (to 1.1.1.1 at the moment, an unfortunate choice given that it is now a well-reputed DNS server like Google’s 8.8.8.8 but, it seems, without all the corporate spying).  There is a bit of irony in blocking lists of telemetry-gathering IPs using lists that are built using telemetry.

Basically, DNSBLs prevent services on the LAN from reaching nasty destinations on the internet by answering any DNS request for a malicious domain name with a dead-end IP address.  When your Windows machine wants to report your web browsing habits to Microsoft, it instead gets a “page not found” error.

IPBL

This integration concept uses an IPBL, a list of IP addresses to block.  An IPBL works at a lower level than a DNSBL and typically is set up to block traffic in both directions – a script kiddie trying to brute force a password can be blocked from reaching the services on the LAN, but the reverse direction is blocked too: if a malicious entity trips F2B, not only are they blocked from trying to reach in, but any sneaky services on your LAN are also blocked from reaching out to them on the internet.

All we need to do is get the block list F2B is maintaining into pfSense.  pfBlockerNG can subscribe to the list easily enough, but the minimum update time is an hour, which is an awfully long time to let someone guess passwords or flood your servers with 404 requests or whatever else you’re using F2B to detect and stop.  So I wrote a simple script that grabs the IP list F2B maintains, cleans it, and uses it to update the packet filter drop lists:

/root/custom/brtblock.sh

#!/usr/bin/env sh
# set -x # uncomment for "debug"

# Get latest block list
/usr/local/bin/curl -m 15 -s https://server.ip/brtdnsbl.txt > /var/db/pfblockerng/original/BRTDNSBL.orig
# filter for at least semi-valid IPs.
/usr/bin/grep  -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' /var/db/pfblockerng/original/BRTDNSBL.orig > /var/db/pfblockerng/native/BRTDNSBL.txt
# update pf tables
/sbin/pfctl -t pfB_BRTblock -T replace -f /var/db/pfblockerng/native/BRTDNSBL.txt > /dev/null 2>&1

HT to Jared Davenport for helping to debug the weird /env issues that arise when trying to call these commands directly from cron.

Preventing Self-Lockouts

One of the behaviors of pfBlockerNG that the dev seems to think is a feature is automatic filter order management.  This overrides manually sorted filter orders and puts pfB’s block filters ahead of all other filters, including, say, allow filters for your own IPs that you never want locked out in case you forget your passwords and accidentally trigger F2B on yourself.  To fix this, you have to use a non-default setting and make all IP block list “action” types “Alias_Native.”

pfBlockerNG Native IP Block Lists

To use Alias_Native lists, you write your own per-alias filter (typically “drop” or “reject”) and then pfBlockerNG won’t auto-order them for you on update.

pfSense Filter Order

Cron Plugin

The last ingredient is updating the list on pfSense quickly.  pfSense is designed to be pretty easy to maintain, so it overwrites most of the file structure on upgrade, making command line modifications frustratingly transient.  I understand that /root isn’t flushed on an upgrade, so the above script should persist inside the /root directory, but crontab -e modifications just don’t stick around.  To have cron modifications persist, install the “Cron” package with the pfSense package manager, then set up a cron job to run the script above to keep the block list updated.  “*/1” means run the script once a minute.
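The package ultimately just writes a standard crontab entry; the equivalent line, with the path per the script above, would be something like:

*/1  *  *  *  *  root  /root/custom/brtblock.sh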

pfSense Cron Config

Results

The system seems to be working well enough; the list of miscreants is small but effectively targeted: 11,840 packets dropped from an average of about 8-10 bad IPs at any given time.

pfBlockerNG current status

Posted at 05:48:43 GMT-0700

Category: FreeBSD, HowTo, Security, technology

Save your email! Avoid the Thunderbird 68 update

Thursday, November 28, 2019 

TLDR:

If you’ve customized TB with plugins you care about, DO NOT UPDATE to 68 until you verify that every plugin you use is compatible.  TB will NOT check for you, and once you launch 68, any plugins that have been updated to 68 compatibility will no longer work with 60.x, which means you’d better have a backup of your .thunderbird profile folder or you’re going to be filled with seething rage and have to undo the update.  This misery is the consequence of Mozilla having failed to fully uphold their obligation to the user and developer communities that rely on and have enhanced the tools they control.

BTW: if you’re using Firefox and miss the plugins that made it more than just a crappy clone of Chrome, Waterfox is great and actually respects users and community developers.  Give it a try.

Avoid Thunderbird 68 Hell

To avoid this problem now and in the future, you have to disable automatic updates.  In Thunderbird: Edit->Preferences->Advanced->General->[Config Editor…]->app.update.auto=False, app.update.enabled=False.

Screenshot of no update prefs thunderbird

On Linux, you should also disable OS updates using Synaptic: select the installed thunderbird 60.x package and then, from the menu bar, Package->Lock Version.

screenshot of locking package in synaptic

If you’ve been surprise updated to the catastrophically incompatible developer vanity project and massive middle finger to the plugin developer community which is 68 (and 60 to a lesser extent), then you have to revert.  This sucks as 60.x isn’t in the repos.

Undo Thunderbird 68 Hell

First, do not run 68.  Ever.  Don’t.  It will cause absolute chaos with your plugins.  At first it showed most of mine as incompatible, then it updated some, then showed others as compatible – but it had deleted the .xpi files, so they weren’t in the .thunderbird folder any more, despite being listed and incorrectly shown as compatible.  This broke some things I could live without, like Extra Format Buttons, but others I really needed, like Dorando Keyconfig and Sieve.  Mozilla’s attitude appears to be “if you’re using software differently than we think you should, you’re doing it wrong.”

The first step, before breaking things even more, is to back up your .thunderbird directory.  You can find the location from Help->Troubleshooting Information->Application Basics->Profile Directory.  Just click [Open Directory].  Make a backup copy of this directory before doing anything else if you don’t already have one; in Linux a command might be:

tar -jcvf thunderbird_profile_backup.tar.bz2 .thunderbird
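Restoring it later is the reverse; assuming the archive was made in your home directory as above:

cd ~ && tar -jxvf thunderbird_profile_backup.tar.bz2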

If you’re running Windows, old installers of TB are available here.

In Linux, using a terminal, see what versions are available in your distro:

apt-cache show thunderbird

I see only 1:68.2.1+build1-0ubuntu0.18.04.1 and 1:52.7.0+build1-0ubuntu1.  Oh well, neither is what I want.  While in the terminal, uninstall Thunderbird 68:

sudo apt-get remove thunderbird

As my distro, Mint 19.2, only has 68.x and 52.x in the apt cache, I searched for a .deb file of a recent version.  I couldn’t find the last plugin-compatible version, 60.9.0, as an easy-to-install .deb (though it is available for manual install from Ubuntu), so I am running 60.8.0, which works.  One could download the executable file of 60.9.1 and put it somewhere (/opt, say) and then update start scripts to execute that location.

I found the .deb file of 60.8.0 at this helpful historical repository of Mozilla installers.  Generally the GUI will auto-install on execution of the download, but don’t launch it until you restore your pre-68 .thunderbird profile directory or it will autocreate profile files that are a huge annoyance.  If you don’t have a pre-68 profile, you will probably have to hunt down pre-68-compatible versions of all of your plugins, though I didn’t note any catastrophic profile incompatibilities (YMMV).

Good luck. Mozilla just stole a day of your life.


Posted at 07:51:32 GMT-0700

Category: HowTo, technology

Frequency of occurrence analysis in LibreOffice

Tuesday, November 19, 2019 

One fairly common analytic technique is finding the rate at which something appears in a time-referenced file, for example a log file.

Let’s say you’re looking for the rate of some reported failure to determine, say, whether a modification or update has made it better or worse.  There are log analysis tools to do this (like Splunk), but one way to do it is with a spreadsheet.

Assuming you have a table with time in a column (say A) and some event text in another (say B) like:

11-19 16:51:03 a bad error happened
11-19 16:51:01 something minor happened
11-19 12:51:01 cat ran by

you might convert that event text to a numerical value (into, say, column C) for example by:

=IF(ISERROR(FIND("error",B1)),"",1)

FIND returns the position of the text (“error”) if it is found, but returns an error if not.  ISERROR converts that result into a logical value: TRUE if FIND failed (text not found), FALSE if it succeeded.  The IF(ISERROR()) construction then lets one specify values for the found and not-found cases – a bit convoluted, but the result in this case will be “” (blank) if “error” isn’t found in B1 and 1 if it is.

Great, fill down and you have a new column C with blank or 1 depending if “error” was found in column B. Summing column C yields the total number of lines in which the substring “error” occurred.

But now we might want to make a histogram with a sum count of the occurrences within a specific time period, say “errors/hour”.

Create a new column, say column D, and fill the first two rows with date/time values separated by the sampling period (say an hour), for example:

11/19/2019 17:00:00
11/19/2019 16:00:00

And fill down; there’s a quirk where LibreOffice occasionally loses one second, which looks bad.  It probably won’t meaningfully change the results; just edit the first error, then continue filling down.

To sum within the sample period, use COUNTIFS in (say) column E to count the rows that meet a set of criteria – in this case three criteria have to be met: the value of C is 1 (not “”), the value of A (time) is at or after the start of the sampling period ($D2), and the value of A is before the end of the period ($D1).  That is:

=COUNTIFS(C1:C500,"1",$A1:$A500,">="&$D2,$A1:$A500,"<"&$D1)

Filling this formula down populates each sampling period with its count of occurrences, effectively the occurrence rate per period, in our example errors/hour.
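With the three sample lines above, only the 16:51:03 line contains “error,” so filling down yields a count of 1 in the row for the period ending at 17:00:00 and 0 everywhere else:

11/19/2019 17:00:00    1
11/19/2019 16:00:00    0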

Posted at 08:36:41 GMT-0700

Category: HowTo, technology

Lets encrypt with security/dehydrated (acme-client is dead)

Thursday, June 27, 2019 

Well….  security/acme-client is dead.  That’s sad.

Long live dehydrated, which uses the same basic authentication method and is pretty much a drop-in replacement (unlike scripts which use DNS authentication, say).

In figuring out the transition, I relied on the following guides:

If you’re migrating from acme-client, you can delete it (if you haven’t already)

portmaster -e acme-client

And on to installation.  This guide is for libressl/apache24/bash/dehydrated.  It assumes you’ve been using acme-client and set it up more or less like this.

Installation of what’s needed

If you don’t have bash installed, you will.  You can also build with ZSH, but set the config before installing.

cd /usr/ports/security/dehydrated && make install clean && rehash

or

portmaster security/dehydrated

This guide also uses sudo, if it isn’t installed:

cd /usr/ports/security/sudo && make install clean && rehash

or

portmaster security/sudo

Set up directories and accounts

mkdir -p /var/dehydrated
pw groupadd -n _letsencrypt -g 443
pw useradd -n _letsencrypt -u 443 -g 443 -d /var/dehydrated -w no -s /nonexistent
chown -R _letsencrypt /var/dehydrated

If migrating from acme-client, this should already be done, but:

mkdir -p -m 775 /usr/local/www/.well-known/acme-challenge
chgrp _letsencrypt /usr/local/www/.well-known/acme-challenge

# If migrating from acme-client

chmod 775 /usr/local/www/.well-known/acme-challenge
chown -R _letsencrypt /usr/local/www/.well-known

Configure Dehydrated

ee /usr/local/etc/dehydrated/config

add/adjust the following (the leading numbers are approximate line positions in the config file):

014 DEHYDRATED_USER=_letsencrypt

017 DEHYDRATED_GROUP=_letsencrypt

044 BASEDIR=/var/dehydrated

056 WELLKNOWN="/usr/local/www/.well-known/acme-challenge"

065 OPENSSL="/usr/local/bin/openssl"

098 CONTACT_EMAIL=gessel@blackrosetech.com

save and it should run:

su -m _letsencrypt -c 'dehydrated -v'

You should get roughly the following output:

# INFO: Using main config file /usr/local/etc/dehydrated/config
Dehydrated by Lukas Schauer
https://dehydrated.io

Dehydrated version: 0.6.2
GIT-Revision: unknown

OS: FreeBSD 11.2-RELEASE-p6
Used software:
bash: 5.0.7(0)-release
curl: curl 7.65.1
awk, sed, mktemp: FreeBSD base system versions
grep: grep (GNU grep) 2.5.1-FreeBSD
diff: diff (GNU diffutils) 2.8.7
openssl: LibreSSL 2.9.2

File adjustments and scripts

By default it will read /var/dehydrated/domains.txt for the list of domains to renew.

Migrating from acme-client? Reuse your domains.txt, the format is the same.

mv /usr/local/etc/acme/domains.txt /var/dehydrated/domains.txt
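Each line of domains.txt defines one certificate: the first name on the line becomes the primary domain and any further names on the same line become alternative names (SANs) on that cert, e.g. (hypothetical domains):

example.com www.example.com mail.example.com
otherdomain.example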

Create the deploy script:

ee /usr/local/etc/dehydrated/deploy.sh

The following seems to be sufficient

#!/bin/sh

/usr/local/sbin/apachectl graceful

and make executable

chmod +x /usr/local/etc/dehydrated/deploy.sh

Give the script a try:

/usr/local/etc/dehydrated/deploy.sh

This will test your apache config and that the script is properly set up.

There’s a bit of a pain in the butt inasmuch as the directory structure for the certs changed.  My previous guide would put certs at /usr/local/etc/ssl/acme/domain.com/cert.pem; this puts them at /var/dehydrated/certs/domain.com

Check the format of your certificate references and use/adjust as needed. This worked for me – note you can set your key locations to be the same in the config file, but the private key directory structure does change between acme-client and dehydrated.

sed -i '' "s|/usr/local/etc/ssl/acme/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-vhosts.conf

Or if using httpd-ssl.conf

sed -i '' "s|/usr/local/etc/ssl/acme/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-ssl.conf

And privkey moves from /usr/local/etc/ssl/acme/private/domain.com/privkey.pem to /var/dehydrated/certs/domain.com/privkey.pem so….

sed -i '' "s|/var/dehydrated/certs/private/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-vhosts.conf

# or

sed -i '' "s|/var/dehydrated/certs/private/|/var/dehydrated/certs/|" /usr/local/etc/apache24/extra/httpd-ssl.conf

Git sum certs

su -m _letsencrypt -c 'dehydrated --register --accept-terms'

Then get some certs

su -m _letsencrypt -c 'dehydrated -c'

-c is “cron” mode, which is how it will be called by periodic.

and “deploy”

/usr/local/etc/dehydrated/deploy.sh

If you get any errors here, track them down.

Verify your new certs are working

cd /var/dehydrated/certs/domain.com/
openssl x509 -noout -in fullchain.pem -fingerprint -sha256

Load the page in the browser of your choice and view the certificate, which should show the SHA 256 fingerprint matching what you got above.  YAY.

Automate Updates

ee /etc/periodic.conf

insert the following

weekly_dehydrated_enable="YES"
weekly_dehydrated_user="_letsencrypt"
weekly_dehydrated_deployscript="/usr/local/etc/dehydrated/deploy.sh"
weekly_dehydrated_flags="-g"

note the flag is --keep-going (-g): keep going after encountering an error while creating/renewing multiple certificates in cron mode

Posted at 11:38:45 GMT-0700

Category: FreeBSD, Security, technology