Textual inference rules such as "X wrote Y <=> Y is the author of X" are relevant to many natural language processing tasks, such as information retrieval, question answering, and textual entailment. Unsupervised methods for extracting inference rules from text, and the collections created automatically by such methods (e.g. DIRT), have been around since the early 2000s. These algorithms assume the Distributional Hypothesis, which states that words occurring in similar contexts have similar meanings. Recent work focuses on making such resources more accurate, both by eliminating incorrect rules and by refining the plausible ones. Work on attaching selectional preferences to inference rules focuses on determining the contexts in which a rule can be applied. This, together with other refinements such as determining the correct entailment direction of asymmetric rules, aims at improving the way such rules are used in real applications, a task that remains a challenge. My talk will present the relevant literature while pointing out directions for future work.