Update app.py
app.py
CHANGED
@@ -7,104 +7,120 @@ import math
# Constants
TOKEN_LIMIT = 1024 # Maximum tokens the model can handle

-#
model_path = "sshleifer/distilbart-cnn-12-6"
text_summary = pipeline("summarization", model=model_path, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Function to summarize text
-def summarize_text(input_text, min_length, max_length):
    summary_output = text_summary(
        input_text,
-        min_length=min_length,
-        max_length=max_length
    )
    return summary_output[0]['summary_text']

-#
-
-article_example = """
Niteesh Nigam, a forward-thinking robotics engineer and AI developer, has consistently demonstrated his passion for innovation and his ability to transform complex technological concepts into impactful real-world solutions. With a Master’s degree in Robotics and Autonomous Systems from Arizona State University (ASU) and a Bachelor’s degree in Mechanical Engineering from the Birla Institute of Technology and Science, Pilani Dubai, Niteesh has cultivated a robust foundation in engineering, computer vision, and artificial intelligence. His interdisciplinary expertise and hands-on experience have made him a standout professional in fields such as robotics, machine vision, deep learning, and control systems.
### Educational and Technical Expertise
Niteesh’s academic journey is marked by excellence in courses that form the backbone of modern robotics and AI, including Machine Vision and Pattern Recognition, Applied Machine Learning, and Artificial Neural Computation. These courses have equipped him with an advanced understanding of statistical methods, machine learning algorithms, and their applications in robotics and autonomous systems.
His technical arsenal includes a diverse array of tools and frameworks like ROS2, Gazebo, Docker, PyTorch, TensorFlow, OpenCV, and AWS. Proficient in Python, MATLAB, and C++, Niteesh is skilled in designing and deploying solutions for robotics, automation, and data-driven systems. His hands-on approach is complemented by certifications, such as the ROS2 Level 2 certification and Stanford’s Machine Learning Specialization, which highlight his commitment to staying at the cutting edge of his field.
### Expertise in Robotics and Control Systems
Niteesh’s work in robotics exemplifies his ability to design, develop, and deploy innovative systems. His Autonomous Warehouse System Project is a testament to his expertise in robotic manipulation and motion planning. In this project, he integrated 3D point cloud-based object localization and barcode tracking with ROS2 and OpenCV to automate package management tasks. Using custom YAML and SDF models in Gazebo, Niteesh designed precise robotic motion with suction grippers, achieving seamless pick-and-place operations.
Additionally, his contributions to line-follower drone systems underscore his proficiency in control systems. Niteesh developed a wind-resistance module capable of handling up to 30 m/s winds, achieving 82% path accuracy. This work demonstrates his mastery of trajectory planning and his ability to merge simulation fidelity with real-world performance, enhancing stability and efficiency.
### Pioneering Work in Machine Vision and Deep Learning
Niteesh’s expertise in machine vision and deep learning is highlighted through groundbreaking projects like the YOLO-Based Unified Framework for Real-Time Vehicle Profiling. By leveraging CUDA during training, he accelerated model convergence, achieving an 83% profiling accuracy and 94% license plate recognition. His innovative approach combined EasyOCR, Deep SORT, and advanced color-matching algorithms to ensure a 90% tracking accuracy for vehicle profiling.
In the field of semantic segmentation, his project Evaluation of Encoder-Decoder Strategies explored architectures like ResNet50-PPM and EfficientNet-DeepLabV3. By optimizing neural networks for feature extraction, Niteesh ensured real-time segmentation speeds up to 15 fps with a mean IoU of 42.14%, setting benchmarks for accuracy and efficiency.
### Advancements in NLP and Large Language Models
Niteesh’s role as an AI Developer at YourBeat Inc. showcases his expertise in NLP and large language models (LLMs). He spearheaded the design of a PostgreSQL database strategy, consolidating multi-source data for accurate analysis and fine-tuning chatbot models. His work involved preprocessing over a million Reddit posts and YouTube transcripts using Python scripts in Dockerized environments, ensuring robust and scalable data pipelines.
Additionally, his research on chatbot platforms like Rasa and Microsoft Bot Framework has informed strategic AI development, enabling more intuitive and flexible conversational systems tailored to user needs.
### Noteworthy Projects and Patents
Niteesh’s innovation extends beyond academic and professional achievements. His Panorama Image Stitching Tool, developed using OpenCV, SIFT, and RANSAC, achieved a remarkable 95% stitching accuracy, enhancing image processing capabilities. By containerizing the tool with Docker and integrating a Flask web interface, he showcased his ability to combine backend efficiency with user-friendly interfaces.
Furthermore, Niteesh is a co-inventor of a Hand Grasp Stimulating Device, provisionally patented to assist patients with spinal cord injuries. This groundbreaking neuro-rehabilitation glove reflects his commitment to leveraging technology for improving lives.
### Professional Experience and Industry Contributions
Niteesh’s professional journey reflects his ability to drive impactful change. At YourBeat Inc., he managed end-to-end data acquisition, preprocessing, and analysis, building the foundation for advanced AI systems. His expertise in AWS, PostgreSQL, and GitHub ensured the seamless integration of tools and frameworks for efficient workflow management.
His tenure at KHK Scaffolding and Formwork Ltd. demonstrated his versatility as an engineer. By integrating robotic arms into welding automation, he improved weld quality by 15% and increased production output by 10%, saving over 200 man-hours monthly. His ability to lead cross-departmental initiatives highlights his collaborative and leadership skills.
### Vision for the Future
Niteesh Nigam’s work embodies a perfect blend of technical mastery, innovation, and a relentless drive to solve complex problems. Whether it’s optimizing robotic systems, advancing machine vision techniques, or harnessing the power of LLMs for conversational AI, Niteesh continues to push the boundaries of what’s possible in technology.
As he looks to the future, Niteesh remains committed to making meaningful contributions in robotics, AI, and automation, with a focus on scalable solutions that benefit industries and communities alike. His journey is a testament to the transformative potential of technology when guided by a visionary like him.
-"""
-
-# Precomputed Summary
-summary_example = summarize_text(article_example, min_length=50, max_length=150)

# Create Gradio interface
with gr.Blocks() as demo:
-    gr.
-    gr.
-    )

-    # Button to Summarize
    summarize_button = gr.Button("Summarize")
-    # Output Summary
    summary_output = gr.Textbox(label="Summary Output", lines=10)

-    # Button click to summarize user-provided input
-    summarize_button.click(
-        summarize_text,
-        inputs=[input_text],
-        outputs=[summary_output]
    )

# Launch the Gradio app
demo.launch(share=False)
# Constants
TOKEN_LIMIT = 1024 # Maximum tokens the model can handle

+# Model path and pipeline initialization
model_path = "sshleifer/distilbart-cnn-12-6"
text_summary = pipeline("summarization", model=model_path, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(model_path)

+# Function to process and tokenize text
+def tokenize_text(input_text):
+    return tokenizer.encode(input_text, truncation=False)
+
# Function to summarize text
+def process_batches(input_text):
+    # Tokenize the input text
+    tokens = tokenize_text(input_text)
+    text_tokens = len(tokens)
+    if text_tokens < 0.05 * TOKEN_LIMIT:
+        return "Error: Text too small to summarize."
+
+    # Batch calculation
+    batches = math.ceil(text_tokens / TOKEN_LIMIT)
+    summaries = []
+
+    while batches > 1:
+        # Split text into approximately equal batches using sent_tokenize
+        sentences = sent_tokenize(input_text)
+        avg_batch_size = math.ceil(len(sentences) / batches)
+        text_batches = [
+            " ".join(sentences[i:i+avg_batch_size])
+            for i in range(0, len(sentences), avg_batch_size)
+        ]
+
+        # Process each batch
+        batch_summaries = []
+        for batch in text_batches:
+            max_length = int((TOKEN_LIMIT / batches) * 0.9)
+            min_length = int((TOKEN_LIMIT / batches) * 0.5)
+            summary_output = text_summary(
+                batch,
+                min_length=min_length,
+                max_length=max_length
+            )
+            batch_summaries.append(summary_output[0]['summary_text'])
+
+        # Stitch all batch summaries
+        input_text = " ".join(batch_summaries)
+        tokens = tokenize_text(input_text)
+        text_tokens = len(tokens)
+        batches = math.ceil(text_tokens / TOKEN_LIMIT)
+
+    # Final check for short text
+    if text_tokens < 0.05 * TOKEN_LIMIT:
+        return "Error: Text too small to summarize."
+
+    return input_text
+
+# Gradio button to set max/min lengths
+user_lengths = {"min_length": None, "max_length": None}
+
+def set_max_min_lengths(latest_text):
+    tokens = tokenize_text(latest_text)
+    text_tokens = len(tokens)
+    max_range = int(0.08 * text_tokens)
+    min_range = int(0.02 * text_tokens)
+    return f"Set max_length between {min_range} and {max_range} tokens."
+
+def validate_lengths(min_length, max_length, text):
+    if not (isinstance(min_length, int) and isinstance(max_length, int)):
+        return "Error: Length values must be integers."
+    if min_length <= 0 or max_length <= 0 or min_length >= max_length:
+        return "Error: Invalid length values. Ensure min_length < max_length and both are positive."
+    user_lengths["min_length"] = min_length
+    user_lengths["max_length"] = max_length
+    return "Done"
+
+# Function to summarize final text
+def summarize_text(input_text):
+    if user_lengths["min_length"] is None or user_lengths["max_length"] is None:
+        return "Error: Please set valid max and min lengths first."
    summary_output = text_summary(
        input_text,
+        min_length=user_lengths["min_length"],
+        max_length=user_lengths["max_length"]
    )
    return summary_output[0]['summary_text']

+# Close any existing Gradio interfaces
+gr.close_all()

# Create Gradio interface
with gr.Blocks() as demo:
+    text_input = gr.Textbox(label="Input Text", lines=10)
+    analyze_button = gr.Button("Analyze Text")
+    analysis_output = gr.Textbox(label="Analysis Result")

+    max_length_input = gr.Number(label="Max Length")
+    min_length_input = gr.Number(label="Min Length")
+    set_lengths_button = gr.Button("Set Max and Min Lengths")
+    length_result = gr.Textbox(label="Length Validation Result")

    summarize_button = gr.Button("Summarize")
    summary_output = gr.Textbox(label="Summary Output", lines=10)

+    def analyze_and_set_text(input_text):
+        latest_text = process_batches(input_text)
+        if latest_text.startswith("Error"):
+            return latest_text, ""
+        return latest_text, set_max_min_lengths(latest_text)
+
+    analyze_button.click(analyze_and_set_text, inputs=text_input, outputs=[analysis_output, length_result])
+    set_lengths_button.click(
+        validate_lengths,
+        inputs=[min_length_input, max_length_input, analysis_output],
+        outputs=length_result
    )
+    summarize_button.click(summarize_text, inputs=analysis_output, outputs=summary_output)

# Launch the Gradio app
demo.launch(share=False)
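The hunk begins at line 7, so the file's import block is not part of this diff; the only hint is the `import math` context in the hunk header. The new code also references `gr`, `torch`, `pipeline`, `AutoTokenizer`, and `sent_tokenize`, so the setup above the hunk presumably looks something like the sketch below. The package aliases and the NLTK `punkt` download are assumptions, not shown in the diff.

```python
# Presumed setup above the hunk (only "import math" is confirmed by the hunk header).
import math

import gradio as gr
import torch
import nltk
from nltk.tokenize import sent_tokenize
from transformers import pipeline, AutoTokenizer

# sent_tokenize needs the punkt sentence model; this download is an assumption.
nltk.download("punkt")
```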
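To make the batch-length arithmetic in `process_batches` concrete, here is a small self-contained sketch of the calculation for a hypothetical 2,500-token input; the token count is illustrative and not taken from the diff.

```python
import math

TOKEN_LIMIT = 1024   # same constant as in app.py
text_tokens = 2500   # hypothetical token count for a long article

batches = math.ceil(text_tokens / TOKEN_LIMIT)    # ceil(2500 / 1024) = 3
max_length = int((TOKEN_LIMIT / batches) * 0.9)   # int(1024 / 3 * 0.9) = 307
min_length = int((TOKEN_LIMIT / batches) * 0.5)   # int(1024 / 3 * 0.5) = 170

print(batches, min_length, max_length)            # 3 170 307
```

With these numbers, each of the three sentence batches is summarized to between 170 and 307 tokens, the batch summaries are stitched back together and re-tokenized, and the loop repeats until the stitched text fits in a single 1024-token pass.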