import streamlit as st # Custom CSS for better styling st.markdown(""" """, unsafe_allow_html=True) # Title st.markdown('
Automatic Language Detection Using Spark NLP in Python
', unsafe_allow_html=True) # Introduction st.markdown("""

Language detection is a critical component of Natural Language Processing (NLP), which involves automatically identifying the language of a given piece of text. This functionality is essential in various multilingual applications where the language of input text might not be known in advance. Accurate language detection can enhance the performance of downstream NLP tasks such as machine translation, sentiment analysis, and information retrieval.

""", unsafe_allow_html=True) # What is Language Detection st.markdown('
What is Language Detection?
', unsafe_allow_html=True) st.markdown("""

Language detection models analyze text to determine its language by examining features such as:

Models are typically trained on extensive datasets (e.g., Wikipedia, Tatoeba) using statistical and deep learning methods to recognize these patterns. Once trained, these models can predict the language of new text by comparing its features with those learned during training.

""", unsafe_allow_html=True) # Importance and Use Cases st.markdown('
Importance and Use Cases
', unsafe_allow_html=True) st.markdown("""

Accurate language detection is pivotal for many applications, including:

""", unsafe_allow_html=True) # Spark NLP's LanguageDetectorDL st.markdown('
Spark NLP\'s LanguageDetectorDL
', unsafe_allow_html=True) st.markdown("""

The LanguageDetectorDL annotator from Spark NLP is designed for high accuracy in language detection. It utilizes pretrained deep learning models to identify languages with precision. This annotator can effectively handle documents containing mixed languages by analyzing sentence segments and selecting the most probable language.

""", unsafe_allow_html=True) # Setup Instructions st.markdown('
Setup
', unsafe_allow_html=True) st.markdown('

To install Spark NLP and extract keywords in Python, simply use your favorite package manager (conda, pip, etc.). For example:

', unsafe_allow_html=True) st.code(""" pip install spark-nlp pip install pyspark """, language="bash") st.markdown('

For other installation options and environments, refer to the official documentation.

', unsafe_allow_html=True) st.markdown("

Then, import Spark NLP and start a Spark session:

", unsafe_allow_html=True) st.code(""" import sparknlp # Start Spark Session spark = sparknlp.start() """, language='python') # Using LanguageDetectorDL st.markdown('
Using LanguageDetectorDL
', unsafe_allow_html=True) st.code(""" # Import necessary modules from sparknlp.base import DocumentAssembler, Pipeline from sparknlp.annotator import LanguageDetectorDL import pyspark.sql.functions as F # Step 1: Transform raw text into `document` annotation document_assembler = ( DocumentAssembler() .setInputCol("text") .setOutputCol("document") ) # Step 2: Detect the language of the text language_detector = ( LanguageDetectorDL.pretrained() .setInputCols("document") .setOutputCol("language") ) # Create the NLP pipeline nlpPipeline = Pipeline(stages=[document_assembler, language_detector]) # Sample texts in different languages data = spark.createDataFrame([ ["Spark NLP is an open-source text processing library for advanced natural language processing for the Python, Java and Scala programming languages."], ["Spark NLP est une bibliothèque de traitement de texte open source pour le traitement avancé du langage naturel pour les langages de programmation Python, Java et Scala."], ["Spark NLP ist eine Open-Source-Textverarbeitungsbibliothek für fortgeschrittene natürliche Sprachverarbeitung für die Programmiersprachen Python, Java und Scala."], ["Spark NLP es una biblioteca de procesamiento de texto de código abierto para el procesamiento avanzado de lenguaje natural para los lenguajes de programación Python, Java y Scala."], ["Spark NLP é uma biblioteca de processamento de texto de código aberto para processamento avançado de linguagem natural para as linguagens de programação Python, Java e Scala"] ]).toDF("text") # Transform the data with the pipeline result = nlpPipeline.fit(data).transform(data) # Show the results result.select("text", "language.result").show(truncate=100) """, language='python') st.text(""" +----------------------------------------------------------------------------------------------------+------+ | text|result| +----------------------------------------------------------------------------------------------------+------+ |Spark NLP is an open-source text processing library for advanced natural language processing for ...| [en]| |Spark NLP est une bibliothèque de traitement de texte open source pour le traitement avancé du la...| [fr]| |Spark NLP ist eine Open-Source-Textverarbeitungsbibliothek für fortgeschrittene natürliche Sprach...| [de]| |Spark NLP es una biblioteca de procesamiento de texto de código abierto para el procesamiento ava...| [es]| |Spark NLP é uma biblioteca de processamento de texto de código aberto para processamento avançado...| [pt]| +----------------------------------------------------------------------------------------------------+------+ """) # One-Liner Alternative st.markdown('
One-Liner Alternative
', unsafe_allow_html=True) st.markdown("""

John Snow Labs has introduced a unified library to simplify workflows across various products, including Spark NLP. Install the library with:

pip install johnsnowlabs

Use the following one-liner code for quick language detection:

""", unsafe_allow_html=True) st.code(""" # Import the NLP module which contains Spark NLP and NLU libraries from johnsnowlabs import nlp # Sample text in Polish sample_text = "Spark NLP to biblioteka edytorów tekstu typu open source do zaawansowanego przetwarzania języka naturalnego w językach programowania Python, Java i Scala." # Detect language with one line of code result = nlp.load('xx.classify.wiki_95').predict(sample_text, output_level='sentence') """, language='python') st.markdown("""
Language Confidence Sentence
pl 9.0 Spark NLP to biblioteka edytorów tekstu typu open source do zaawansowanego przetwarzania języka naturalnego w językach programowania Python, Java i Scala.
""", unsafe_allow_html=True) st.markdown("""

Benefits of the One-Liner

This approach is convenient for quick implementations and testing. The one-liner model is based on default configurations, which may suffice for general use cases. However, for more specialized needs, customizing the pipeline or choosing specific models might be necessary.

""", unsafe_allow_html=True) # Notes and Recommendations st.markdown('
Notes and Recommendations
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True) # Benchmarking Section st.markdown('
Benchmarking
', unsafe_allow_html=True) st.write("") st.markdown('

ld_wiki_tatoeba_cnn_375 Model Evaluated on Europarl dataset which the model has never seen:

', unsafe_allow_html=True) st.text(""" +--------+-----+-------+------------------+ |src_lang|count|correct| precision| +--------+-----+-------+------------------+ | fr| 1000| 1000| 1.0| | de| 1000| 999| 0.999| | fi| 1000| 999| 0.999| | nl| 1000| 998| 0.998| +-------+--------------------+ | el| 1000| 997| 0.997| |summary| precision| | en| 1000| 995| 0.995| +-------+--------------------+ | es| 1000| 994| 0.994| | count| 21| | it| 1000| 993| 0.993| | mean| 0.9758952066282511| | sv| 1000| 991| 0.991| | stddev|0.029434744995013935| | da| 1000| 987| 0.987| | min| 0.8862144420131292| | pl| 914| 901|0.9857768052516411| | max| 1.0| | hu| 880| 866|0.9840909090909091| +-------+--------------------+ | pt| 1000| 980| 0.98| | et| 928| 907|0.9773706896551724| | ro| 784| 766|0.9770408163265306| | lt| 1000| 976| 0.976| | bg| 1000| 965| 0.965| | cs| 1000| 945| 0.945| | sk| 1000| 944| 0.944| | lv| 916| 843|0.9203056768558951| | sl| 914| 810|0.8862144420131292| +--------+-----+-------+------------------+ """) # Conclusion st.markdown("""

Conclusion

Accurate language detection is a foundational step in many NLP workflows. Spark NLP’s LanguageDetectorDL annotator offers a robust solution for detecting languages in diverse text corpora. With its integration into Spark's powerful data processing framework, it enables efficient handling of large-scale multilingual datasets, providing accurate language identification for various applications.

""", unsafe_allow_html=True) # References and Additional Information st.markdown('
References and Additional Information
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True) st.markdown('
Community & Support
', unsafe_allow_html=True) st.markdown("""
""", unsafe_allow_html=True)