Wednesday, January 23, 2013

Some Sieve rules which I used in the past

Here is a list of Sieve rules which I used in the past.
Some people see no use in managing incoming email automatically, but I have always found it useful; I guess it depends on your style of working.

At Sun Microsystems we used to run email on Sun's own product (Sun Java System Messaging Server), and there was a web interface to create Sieve rules, since the normal interface sometimes did not provide enough flexibility.


The examples below use one important feature: each rule ends with 'stop;', which means that if the rule matches and its action is executed, no further rules are evaluated; the email simply ends up in the target folder.
If you want the capability to file copies of an email into different folders, simply omit the 'stop;' and the email will also be passed on to the next rule, as in the sketch below.
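As a rough sketch (the mailing list name, sender and folder names are made up for illustration), two rules without 'stop;' can file a copy of the same email into two folders:

require "fileinto";
# first rule: no 'stop;', so evaluation continues after filing
if header :contains "To" "project-interest" {
    fileinto "project-stuff";
}
# second rule: the same email can also be filed here
if header :contains "From" "buildbot" {
    fileinto "automated";
}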

Filter on recipient (To, CC, ...)

I used this to filter emails going to maillists like '...-interest'.

recipient: abc-interest
target folder: abc-stuff

# RULE: email to maillist abc-interest
require "fileinto";
if anyof(header :contains ["To","Cc","Bcc","Resent-to","Resent-cc","Resent-bcc"] "abc-interest") {
    fileinto "abc-stuff";
    stop;
}

Filter on sender

I used this to filter emails from my manager. Of course this works in such a simple way only if your manager's last name is fairly unique.

sender: Beard
target folder: FromLarry

#RULE: email from manager
require "fileinto";
if anyof(header :contains ["From","Sender","Resent-from","Resent-sender","Return-path"] "Beard") {
    fileinto "FromLarry";
    stop;
}

Filter on subject

I used this to filter emails which arrive from time to time (or regularly), sent from some automated script. Note that it uses ':matches' with '*' for pattern matching, so 'rsync *' matches any subject that starts with 'rsync '.

subject: rsync *
target folder: rsync

#RULE: filter all emails from the rsync daemon
require "fileinto";
if anyof(header :matches ["Subject","Comments","Keywords"] "rsync *") {
    fileinto "rsync";
    stop;
}

A rule to reject large emails

I hate large email attachments - and hated them even more years ago, when capacity (I started with a free email account of 10MB) and performance were much more restricted - hence this rule.
I wanted to remind people nicely to reconsider their email behaviour.
In fact the rule is deliberately lenient: it only sends an automatic reply via 'vacation', so the emails do arrive and are not rejected (notice also the missing 'stop;').

#RULE: Reject_large_emails
require "vacation";
if size :over 2M {
   vacation "Dear sender,
I'm sorry but I do not accept email over 2MB in size.
Please upload larger files to a server and send me a link.

Thanks.
Andreas";
}
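If you really wanted to refuse such emails instead of just nagging the sender, a minimal sketch using the 'reject' extension (RFC 5429) could look like this - note that, unlike the rule above, this actually refuses delivery:

#RULE: variant that really rejects large emails (illustration only)
require "reject";
if size :over 2M {
    # the message is bounced back to the sender and not delivered locally
    reject "I do not accept email over 2MB in size. Please upload the file to a server and send me a link.";
    stop;
}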

Filter on a combination of subject and sender

This is a refinement of the 'subject' rule above. If you have e.g. cron jobs on some servers which regularly (or better: only in case of errors) send emails to alert staff, then this filter comes in quite handy (assuming that the sender of the email is e.g. some kind of specially named admin account). Note the use of 'allof' so that both conditions have to match.

sender: canary
subject: rsync *
target folder: ARCHIVE/ITSM/rsync

#RULE: filter on sender 'canary' and subject 'rsync'
require "fileinto";
if allof(header :contains ["From","Sender","Resent-from","Resent-sender","Return-path"] "canary",
         header :matches ["Subject","Comments","Keywords"] "rsync *") {
    fileinto "ARCHIVE/ITSM/rsync";
    stop;
}

Of course none of these rules is perfect in the sense that it will never pick up a wrong email occasionally, but they served me well in the past.
For example, since I was based in Europe and my managers were in the US, the first thing in the morning was to check the folder with emails from my manager (assuming that a manager's emails have a certain priority, which of course anyone can question :-) ).


If a mail server supports the ManageSieve protocol, one can use email clients (e.g. the Sieve add-on for Thunderbird) or command line tools to create and upload Sieve filters.
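For example, the sieve-connect command line tool can upload and activate a script along these lines (server, user and script names are placeholders, and the exact options may vary between versions, so check the man page):

sieve-connect --server mail.example.com --user andreas \
    --localsieve myrules.sieve --remotesieve myrules \
    --upload --activate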
