
What is Google’s Open Source Language Interpretability Tool?

  • Neelam Tyagi
  • Aug 31, 2020

A field at the intersection of linguistics, computer science and artificial intelligence, natural language processing (NLP) is concerned with the interplay between computers and human languages, in particular with how to program computers to process and analyse large amounts of natural language data. A wide range of advanced techniques is used to understand natural language models.

 

NLP models have achieved unprecedented results as their design and modelling have advanced. Even so, developers continue to experiment with a variety of techniques to answer open questions about why a model behaves the way it does.

 

It is hard to deny that switching between tools, or adopting a new method from research code, takes time. For an ideal, seamless workflow, developers need to be able to explore the data, see what the model does with it and why, and then test hypotheses and build an understanding of the model. (Recommended blog: 7 Natural Language Processing Techniques for Extracting Information)


 

On that note, Google has introduced the Language Interpretability Tool (LIT), a toolkit with a browser-based UI, which this blog presents.


 

What is the Language Interpretability Tool (LIT)?

 

Google AI researchers have built the Language Interpretability Tool (LIT), an open-source platform that lets developers visualise, understand and inspect natural language processing models.

 

LIT, a toolkit with a browser-based user interface, supports tasks such as local explanations and rich visualisation of model predictions, along with aggregate analysis covering metrics, embedding spaces and flexible slicing.

 

According to the accompanying paper, LIT supports a broad range of model types and techniques and is designed for extensibility through simple, framework-agnostic APIs.
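To give a flavour of that framework-agnostic API, below is a minimal sketch of a model wrapper. The module paths and the predict_minibatch() method follow the LIT documentation at the time of writing and may differ between versions; the rule-based "classifier" is purely a toy stand-in.

```python
# A minimal sketch of LIT's framework-agnostic model API, assuming the
# lit-nlp package is installed. Module paths and the predict_minibatch()
# method follow the LIT docs at the time of writing and may vary by version.
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types


class ToySentimentModel(lit_model.Model):
    """Stand-in classifier; a real wrapper would call TensorFlow, PyTorch, etc."""

    LABELS = ["negative", "positive"]

    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        return {"probas": lit_types.MulticlassPreds(vocab=self.LABELS,
                                                    parent="label")}

    def predict_minibatch(self, inputs):
        # Any model callable from Python can sit behind this method; this
        # toy rule scores sentences containing "good" as positive.
        for ex in inputs:
            score = 0.9 if "good" in ex["sentence"].lower() else 0.2
            yield {"probas": [1.0 - score, score]}
```

Because the wrapper only declares its input/output spec and a batched prediction method, the rest of LIT's modules can work with it regardless of the underlying framework.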


The Language Interpretability Tool (LIT) user interface, showing a fine-tuned BERT model.

The Language Interpretability Tool (LIT) UI; Pic credit


Essentially, LIT focuses on AI models to answer deep questions about their behaviour, such as:

  • Why did the model make this particular prediction?

  • Can the prediction be attributed to adversarial behaviour, or

  • to undesirable priors in the training set?

 

Although LIT is under active development, the code and installation instructions are available on GitHub (github.com/PAIR-code/lit), along with full LIT documentation.

 

According to the paper's authors, the work is progressing steadily, and LIT has been built with the following principles in mind:

  • Flexible: The tool supports a variety of NLP tasks, including classification, seq2seq, language modelling and structured prediction.

  • Extensible: It is designed first and foremost for experimentation, and can be reconfigured and extended for novel workflows.

  • Modular: The interpretation components are self-contained, portable and simple to implement.

  • Framework agnostic: LIT works with any model that can be driven from Python, including TensorFlow and PyTorch models.

  • Simple to adopt: LIT has a low barrier to entry; only a small amount of code is needed to hook up models and data (a sketch of wiring up a toy dataset and model follows this list).
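Continuing the toy model wrapper from the earlier sketch, the snippet below illustrates the last two principles: wrapping a handful of examples in a Dataset subclass and pointing the development server at both. Class names and module paths follow the LIT documentation and may vary by version; the in-memory examples are illustrative only.

```python
# A sketch of hooking up data and launching LIT's browser UI. Assumes the
# lit-nlp package; ToySentimentModel is the wrapper from the earlier sketch,
# and the two in-memory examples are hypothetical stand-ins for a real file.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import types as lit_types


class ToySentimentData(lit_dataset.Dataset):
    """A tiny in-memory dataset; a real wrapper would read from disk."""

    def __init__(self):
        self._examples = [
            {"sentence": "a good and heartfelt film", "label": "positive"},
            {"sentence": "a dull, plodding mess", "label": "negative"},
        ]

    def spec(self):
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=["negative", "positive"]),
        }


if __name__ == "__main__":
    models = {"toy_sentiment": ToySentimentModel()}  # wrapper sketched earlier
    datasets = {"toy_data": ToySentimentData()}
    # Starts the Python backend and serves the browser UI on a local port.
    dev_server.Server(models, datasets, **server_flags.get_flags()).serve()
```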

 

Limitations: 

 

  1. As LIT is an evaluation tool, it is not suitable for monitoring live training runs. Moreover, because LIT is designed to be interactive, it cannot process large datasets as efficiently as offline tools such as TFMA. At present, the LIT user interface can handle around 10,000 examples at a time.

 

  2. Being framework-agnostic, it does not offer the deep model integration of tools such as AllenNLP Interpret or Captum. This keeps things simple and convenient, but it means some techniques, such as Integrated Gradients, require extra code on the model side.


 

What are the specifications of LIT?

 

Listed below are the notable specifications that Google's Language Interpretability Tool offers:

  1. LIT is an open-source platform released under the Apache 2.0 license.

  2. LIT computes and presents metrics over datasets in their entirety in order to surface patterns in model performance.

  3. LIT supports various natural language processing tasks, including language modelling, classification and structured prediction.

  4. It can be used with any model that can be driven from Python, including TensorFlow, PyTorch and remote models behind a server.

  5. LIT allows interactive interpretation not only at the level of a single data point but also across an entire dataset, with strong support for counterfactual generation and evaluation (see the generator sketch after this list).

  6. LIT can be used to investigate how language models process input and predict continuations, helping to detect biases and tendencies.
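As a flavour of how counterfactual generation can be extended, here is a toy generator sketch. The Generator base class and the generate() signature are taken from the LIT API documentation and may differ between versions, so treat this as an assumption rather than a verified interface.

```python
# A toy counterfactual generator sketch: it produces a new data point by
# flipping a sentiment-bearing word in the selected example. The base class
# and generate() signature are assumptions based on the LIT API docs.
from lit_nlp.api import components as lit_components


class SentimentWordFlipper(lit_components.Generator):
    """Swaps 'good' and 'bad' in the 'sentence' field to create counterfactuals."""

    def generate(self, example, model, dataset, config=None):
        text = example.get("sentence", "")
        if "good" in text:
            flipped = text.replace("good", "bad")
        elif "bad" in text:
            flipped = text.replace("bad", "good")
        else:
            return []  # nothing to flip, so no counterfactuals
        new_example = dict(example)
        new_example["sentence"] = flipped
        return [new_example]
```

In the documented setup, custom generators are registered with the server alongside models and datasets, and their outputs appear in the UI as new data points that can be evaluated like any other.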


In-built modules in Google's Language Interpretability Tool (LIT)


  1. The LIT UI is written in TypeScript and communicates with a Python backend that serves models, datasets, counterfactual generators and other analysis components.

  2. The browser-based user interface is a single-page web app built with lit-element and MobX; the Python backend hosts the NLP models, data and analysis components.

  3. LIT lets developers examine and understand how their AI models behave, and why they may struggle in certain cases.

 

 

Benefits of LIT

 

LIT can assist developers in a number of ways; some of them are described below:

 

  1. Examine the dataset: With LIT, users can scrutinise the dataset through various modules such as the data table and the embedding projector.

  2. Explore data points: NLP developers can identify compelling data points that merit analysis, gain insight from them, and save selections for later use.

  3. Create new data points: Starting from data points of interest, developers can generate new data points either manually, by editing, or through automated counterfactual generators, for example back-translation or nearest-neighbour retrieval.

  4. Compare side by side: Developers can compare two or more NLP models at a time on the same data, or contrast a single model on two data points simultaneously (see the sketch after this list).

  5. Compute metrics: The tool calculates and displays metrics for the entire dataset, the current selection and generated slices, created manually or automatically, in order to identify patterns in model performance.

  6. Explain local behaviour: Developers can analyse a model's behaviour on selected data points using a range of modules, depending on the model type and task.
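For point 4, the sketch below shows the mechanics of side-by-side comparison: registering two model wrappers in one LIT instance so the UI can contrast their predictions on the same data. It reuses the toy wrapper and dataset from the earlier sketches purely for illustration; in practice the two entries would wrap different checkpoints or architectures.

```python
# Sketch of model comparison: two wrappers registered under different names
# in one server, so the UI can contrast their predictions on the same data.
# ToySentimentModel / ToySentimentData are the illustrative wrappers above.
from lit_nlp import dev_server
from lit_nlp import server_flags

models = {
    "baseline": ToySentimentModel(),   # in practice: e.g. a base checkpoint
    "candidate": ToySentimentModel(),  # in practice: e.g. a fine-tuned variant
}
datasets = {"toy_data": ToySentimentData()}

dev_server.Server(models, datasets, **server_flags.get_flags()).serve()
```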


 

Conclusion

 

I hope this blog has provided a useful overview of LIT, an open-source platform that allows developers to visualise and understand NLP models. In short, the Language Interpretability Tool that Google has introduced offers a unified user interface and a set of components for visualising and examining the behaviour of NLP models.

 

Although it is under active development by a small team, LIT already supports a broad range of workflows, from explaining individual predictions and detailed analysis to probing for bias with counterfactuals. The wide availability of Google's automatic speech recognition suggests LIT could prove practical for many organisations refining their assistants' interactions.
