Distilbert: A Smaller, Faster, and Distilled BERT   - Zilliz Learn
distilbert
bert

DistilBERT retains 97% of BERT's language-understanding capability while being 40% smaller and 60% faster.
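The "40% smaller" figure follows directly from the published parameter counts; a quick back-of-the-envelope check (the counts below are the approximate figures from the DistilBERT paper, not measured here):

```python
# Approximate published parameter counts (DistilBERT paper figures)
bert_params = 110_000_000       # BERT-base
distilbert_params = 66_000_000  # DistilBERT

reduction = 1 - distilbert_params / bert_params
print(f"DistilBERT has {reduction:.0%} fewer parameters than BERT-base")
```

The reduction comes from halving the number of transformer layers (12 to 6) while keeping the hidden size, then recovering accuracy through knowledge distillation from the full BERT teacher.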

Comparing image captioning in LLMs with Gemini Vision Pro, BLIP, BLIP-2, and LLaVA
llava
vlm
llm
imagecaptioning
multimodal

The differences between BERT and mBERT
bert
transformers
multilingualbert
multilingualembedding

The main differences between BERT (Bidirectional Encoder Representations from Transformers) and mBERT (Multilingual BERT) lie in their…
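One concrete difference between the two checkpoints is vocabulary size: BERT-base uses an English-only WordPiece vocabulary, while mBERT shares a single, much larger vocabulary across 104 languages. An illustration using the approximate published figures from the model cards (quoted, not measured here):

```python
# Approximate published vocabulary sizes (Hugging Face model cards)
bert_vocab = 30_522     # bert-base-uncased (English WordPiece)
mbert_vocab = 119_547   # bert-base-multilingual-cased (104 languages)

print(f"mBERT's shared vocabulary is ~{mbert_vocab / bert_vocab:.1f}x larger")
```

The larger shared vocabulary is what lets mBERT tokenize text from many scripts, at the cost of shorter effective subwords per language.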

Prompt Engineer
promptengineer
largelanguagemodels
llm

Prompt Engineering Terms Explained

Work History

Senior Data Analyst

National News Bureau of Thailand

Sep 2016 - Present

Bangkok, Thailand

- Collaborated on the development of basic to moderately complex SQL queries for campaign logic and reporting, optimizing data retrieval processes by 30%.
- Utilized software tools, websites, and open-source research methods to identify relevant news coverage for government interests, improving news tracking efficiency by 25%.
- Delivered high-quality daily news reports, Monday through Friday, maintaining a 95% on-time delivery rate.
- Analyzed engagement data from News, Public Policy, and Healthcare publishers, identifying trends that led to a 15% increase in content interaction over six months.
- Worked cross-functionally with Communications teams to share data insights with internal and external stakeholders, enhancing strategic alignment.
- Troubleshot data processes and values, partnering with IT to resolve system acceptance issues, achieving a 98% resolution rate on first-time troubleshooting efforts.

Ph.D. Research Scholar

Institute of Informatics and Communication, University of Delhi

Jan 2022 - Mar 2025

Delhi, India

Research Works
- Cross-Lingual Depression Identification: Developed a multi-head attention network for explainable depression identification in Thai.
- Psychological Well-Being Indicators: Identified key indicators of psychological well-being through machine learning and explainable AI in Reddit interactions.
- Thai Language Depression Detection: Utilized attentive network models to detect depression in Thai posts, providing deeper mental health insights.
- LGBT Cyberbullying Detection: Applied transformer algorithms to identify cyberbullying in Thai content, specifically within the LGBT community.
- SEO Transformation with Generative AI: Explored the challenges and opportunities associated with transforming SEO in the age of generative AI.

AI for cybersecurity Research Intern

International Center for AI and Cyber Security Research and Innovations

Oct 2023 - Dec 2023

Taiwan

Research Works:
1. Mutual Information-Based Logistic Regression for Phishing URL Detection: Developed a logistic regression model utilizing mutual information metrics to effectively identify phishing URLs, enhancing cybersecurity measures. Technologies used: Python, scikit-learn, pandas, NumPy.
2. Thai-Language Chatbot Security: Implemented a detection system for instruction attacks in Thai-language chatbots using XLM-RoBERTa for text embeddings and Bi-GRU for sequence modeling, improving chatbot resilience against adversarial threats. Technologies used: Python, XLM-RoBERTa, Bi-GRU, TensorFlow, Hugging Face Transformers, LLM Security.
3. Adversarial Learning for Mirai Botnet Detection: Leveraged Long Short-Term Memory (LSTM) networks combined with XGBoost to create an adversarial learning framework for detecting Mirai botnet activities, enhancing network security. Technologies used: Python, LSTM, XGBoost, Keras, TensorFlow.
4. Speaker Recognition in Kannada Language: Adopted Mel-Frequency Cepstral Coefficients (MFCCs) integrated with a hybrid model of Random Forest and Multi-Layer Perceptron (MLP) for robust speaker recognition in the Kannada language, facilitating advancements in voice-based applications. Technologies used: Python, MFCC, Random Forest, Multi-Layer Perceptron, scikit-learn, librosa.
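The first project pairs mutual-information feature scoring with logistic regression. A minimal sketch of that general pattern on toy data (this is not the author's actual pipeline; the feature names and synthetic labels below are invented for illustration):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical URL features: [url_length, num_dots, has_at_symbol, num_hyphens]
rng = np.random.default_rng(0)
X = rng.random((200, 4))
# Synthetic labels: "phishing" depends only on features 0 and 2
y = (X[:, 0] + X[:, 2] > 1.0).astype(int)

clf = make_pipeline(
    # Keep the 2 features with the highest mutual information with the label
    SelectKBest(mutual_info_classif, k=2),
    LogisticRegression(),
)
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Mutual information captures non-linear feature relevance before the linear classifier is fit, so uninformative features (here, `num_dots` and `num_hyphens`) are dropped rather than adding noise to the regression.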

Communities (11)

GDG New Delhi

55266 members

GDG Cloud New Delhi

48625 members

WTM Delhi

18987 members

Women Who Code Delhi

5918 members

AI Community Delhi

4196 members

GDG Noida

37122 members

Badges: Data Analyst Expert
