Simon Braß

Home-Office Physicist
Atom Icon - atom by Erin Agnoli from the Noun Project

Apple MacOS: Spotlight

I'm new to the wondrous world of Apple Mac products. In an instant, I can describe my experience as:

If you really do not want to care about your work/computing environment, then, an Apple will be fine.

However, I deeply care about my work environment and am always trying to improve my tooling. Hence, I prefer a Linux-based environment, e.g. Arch Linux or Linux From Scratch. Apple's idiom, on the other hand, seems to be that they will care for one's tooling, so one doesn't have to. And I have some issues with such an idiom and its consequences:

  1. I do not know what is running.
  2. I do not know when it is running.
  3. I do not know how to fix it - the documentation is hard to reach.

And that is my problem with Spotlight, Apple's fancy index-based search and program starter.

In my case, it was indexing my heavy working directory, where I place millions of files every day - APFS under that workload would be worth a story of its own. How did I find out that I have a problem with Spotlight? Thousands of running threads and heavy CPU load throttling the performance of my MacBook. But the only thing I knew was a service name: mdworker_shared. Favorite search engine to the rescue!
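Before I had the name, a quick look at the process list pointed at the culprit. A portable sketch (nothing macOS-specific assumed):

```shell
# List the five most CPU-hungry processes (%CPU is column 3 of `ps aux`).
ps aux | sort -rn -k 3 | head -n 5
```

In my case, mdworker_shared instances dominated this list.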

So, Spotlight runs an indexing service, mdworker_shared, in the background, which would be fine for a standard user with a slowly growing number of files. Spotlight allows excluding directories and files (under its Privacy settings), and with that, my device is running smoothly (again).

PS: I also needed to exclude my working directory from Time Machine…
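The Time Machine exclusion can also be set from the terminal; a sketch, assuming a hypothetical working directory path (the Spotlight exclusion itself I added via System Preferences → Spotlight → Privacy):

```shell
# Exclude a fixed path from Time Machine backups (placeholder path).
sudo tmutil addexclusion -p "$HOME/work/heavy-io"
# Check that the exclusion took effect.
tmutil isexcluded "$HOME/work/heavy-io"
```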

Postfix: Smarthost and Strato

<2021-10-01 Fri>

You may or may not have noticed that I moved my webpage away from DESY, as I left DESY for my job at Immutable Insight at the beginning of September. With that change at hand, I used the chance to get myself my own domain: https://phibra.de - with all the bells and whistles.

So far, I'm quite happy with my new setup - I may elaborate on the details in a future article. For now, I combine Amazon S3 as a static web host with Cloudflare's caching to slash costs, and Strato as domain reseller and mail server provider.

If you asked me what the biggest gain of the new setup is, I would answer that I can specify my own MX record for my domain: no dependence on whatever mail service you could name. However, setting up and running a mail server is a pain in the ass. Therefore, I lied when I said that I do not depend on any mail service. I use Strato's mail server; in particular, I relay every mail from my private servers to the Strato mail server using Postfix.

Lo and behold, as every time I work on a mailing system, there were some issues with it, and I seldom have a clue why. I mean, I know what I am doing and I really try to understand the errors, but understanding and fixing mail errors is sometimes like hell on earth.

With a little help from my favorite web search engine, I came up with the solution in the end. First, we need to inspect our connection to Strato's mail server, for which I used the great instructions from Steven Rombrauts. As we use TLS, we spin up openssl and connect to the mail server on port 465:

sudo openssl s_client -starttls smtp -connect smtp.strato.de:465

No chance to connect to the Strato server, connection closed.

As a first step, I checked the details of the connection with the options -state -debug. And I needed to fix my certificate chain:

sudo openssl s_client -starttls smtp -connect smtp.strato.de:587 \
     -state -debug \
     -cert /etc/letsencrypt/live/<DOMAIN>/cert.pem \
     -key /etc/letsencrypt/live/<DOMAIN>/privkey.pem

After fixing (?) my certificate chain, I still could not connect to the mail server.

And then, it hit me! Strato's configuration page!!!

I had to read that page twice - mea culpa, I didn't expect that information to be lying around in plain view:

Please note: When using it as a smarthost (relay), e.g. via Exchange 2003, please use standard authentication via TLS (STARTTLS) over port 587 (alternatively port 25). For authentication, please use, as usual, your e-mail address and the corresponding e-mail password.

Use as an open mail relay is not possible, as only the SMTP-Auth method is supported.

Yep, yep, I used port 465. Changing port 465 to 587 did the trick.
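For completeness, the relevant Postfix settings for a smarthost with SMTP-Auth over STARTTLS look roughly like this (a sketch using standard Postfix parameters; the credential file path is the conventional one):

```
# /etc/postfix/main.cf
relayhost = [smtp.strato.de]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

where /etc/postfix/sasl_passwd holds one line of the form `[smtp.strato.de]:587 user@example.org:password` (placeholder credentials), hashed afterwards with `postmap /etc/postfix/sasl_passwd`.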


WHIZARD: UFO restrictions

<2021-07-26 Mon>

WHIZARD has the option to pass model restrictions to the tree-level matrix-element generator O'Mega. This option allows us to manipulate the production of the amplitudes beyond the simple process definition of WHIZARD.

The following statements are allowed (including a logical and-operator && to combine several restrictions):

  1. Explicit selection of a propagator, 3 + 4 ~ Z,
  2. Exclusion of a propagator or list of propagators, !A or !e:nue,
  3. Exclusion of a coupling constant or list of coupling constants, ^qlep:gnclep,
  4. Exclusion of a specific vertex or list of vertices, ^[A:Z,W+,W-],
  5. Exclusion of a specific vertex or list of vertices with a coupling constant or list of couplings, ^c1:c2:c3[H, H, H].

The examples are taken from the WHIZARD manual.
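In Sindarin, such a restriction is attached to the process definition via the string variable $restrictions; a minimal sketch (the process itself is just an illustrative example):

```
process eeww = e1, E1 => "W+", "W-" { $restrictions = "!A" }
```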

The restrictions feature comes in quite handy for UFO models, where we can remove unnecessary amplitude terms from our computation and spare computational resources. I want to note that setting a coupling constant to zero also avoids the computation of a term (but not the code production), however at the cost of an additional if-condition at each term evaluation (at least for O'Mega).

We require the coupling constant names for the restrictions. But what are the coupling constants in a UFO model? I can already state that a coupling constant does not equal an independent model parameter! But then, how are the coupling constants connected to the model parameters? Fortunately, each UFO model provides a couplings.py file with the following repeating structure:

GC_1 = Coupling(name = 'GC_1',
                value = '-(ee*complex(0,1))/3.',
                order = {'QED':1})

As the number of couplings can be quite huge, I want an automated solution: I scan couplings.py with grep for my model parameter (in value), select the line before each match, and massage the output a little bit into a list of the form a:b:c:d.... The result is then:

grep --before-context 1 FT0 SM_Ltotal_Ind5v2020v2_UFO/couplings.py | \
    grep Coupling | \
    cut -d' ' -f1 | \
    tr '\n' ':'

Next, I just need to append this to a file and include it (with some manual edit) into my Sindarin files.
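As a sanity check, the pipeline can be tried on a toy couplings.py (the contents below are invented for illustration):

```shell
# A fake couplings.py with three couplings, two of which involve FT0.
cat > /tmp/toy_couplings.py <<'EOF'
GC_10 = Coupling(name = 'GC_10',
                 value = '(FT0*complex(0,1))/2.',
                 order = {'QED':2})
GC_11 = Coupling(name = 'GC_11',
                 value = '-(ee*complex(0,1))/3.',
                 order = {'QED':1})
GC_12 = Coupling(name = 'GC_12',
                 value = 'FT0*ee',
                 order = {'QED':1})
EOF
# Same pipeline as above: match FT0, keep the preceding Coupling line,
# take the coupling name, and join the names with colons.
grep --before-context 1 FT0 /tmp/toy_couplings.py | \
    grep Coupling | \
    cut -d' ' -f1 | \
    tr '\n' ':'
# prints: GC_10:GC_12:
```

The trailing colon is part of the manual edit mentioned above.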

Back to business: I have other fish to fry.

Matplotlib Cheatsheet

<2021-06-09 Wed>

Today, I came across the magnificent cheatsheet from Matplotlib: https://github.com/matplotlib/cheatsheets#cheatsheets.

And it is awesome! In general, Matplotlib is great: the API, the documentation, the tutorials, the examples, and so on. The developers and the community put a lot of effort into the project, and that effort shows.

The cheatsheet is rather new - the earliest commit in the GitHub repository is from <2020-06-25 Thu> - hence its existence took me by surprise today.

However, I can tell you that the design and content of the cheatsheet go together very well. It is a great thing to just look at and draw some inspiration from.

Update: Although there have been submissions to Hacker News (https://hn.algolia.com/?q=matplotlib+cheatsheet), none of them took off (i.e., gathered enough points to reach the front page). I should crawl the new page more often.

Theory Cluster: Crontab and Mail Delivery

  • <2021-03-19 Fri>
  • <2021-03-22 Mon> Update

I'm a heavy user of scratch partitions: the place where we should point all I/O-heavy programs, thus avoiding unnecessary network load on shared filesystems. However, scratch partitions are volatile by nature - there are no backups, no warnings about their status, and so on. Thus, our data barely exist. Therefore, it is important to back the data up to other places, in my case the Theory-wide network filesystem (NFS). And we need automation - backups always need automation, and an alert that something happened (or did not)!

First, we have two choices to automate our backups:

  1. Crontab,
  2. Systemd timer.

Although I prefer systemd, in this case I will go for simplicity, i.e. crontab (see info crontab). Second, we want to be notified that something happened - after some time, when we know that everything works fine, we can change this to only something bad happened.

crontab -e
> 0 5 * * * rsync -a --delete --stats <data> <backup>
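Cron mails each job's output to the crontab owner by default; a MAILTO line makes the destination explicit. A sketch of the resulting crontab (the address is a placeholder):

```
MAILTO=user@example.org
# Daily at 05:00: mirror scratch to NFS; the --stats summary arrives by mail.
0 5 * * * rsync -a --delete --stats <data> <backup>
```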

Then, checking info sendmail, we see that the cluster has Postfix installed. Yay, we can sendmail! We only need to create a forward file ~/.forward for our user (see info local) containing a single line with our forwarding email address. We can then verify that it works with sendmail -t [ENTER] Hello World!!! [Ctrl+D].
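A sketch of the forward file setup (the address is a placeholder, not my real one):

```shell
# ~/.forward must contain exactly one line: the forwarding address.
printf 'user@example.org\n' > "$HOME/.forward"
cat "$HOME/.forward"
# Then test the delivery interactively:
#   $ sendmail -t
#   To: user@example.org
#   Subject: test
#
#   Hello World!!!
#   ^D
```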

Update: I'm not entirely sure whether we need to add ~/.forward at all, as the mail and user accounts are connected via LDAP. And it's even a little more complicated: AFS provides the home directory. However, it seems to work without (access to) the local forward file.


You reached the end of the frontpage!

You may want to look at my articles or the frontpage archive.

Simon Braß ([email protected])

Created: 23 Oct 2020 and last modified: 2021-10-14 Thu 11:20

Emacs 27.2 (Org mode 9.4.6)