Input: Training set T = {(U1, W1, V1), (U2, W2, V2), …, (Un, Wn, Vn)}; two classifiers k1 and k2; a testing object M = (u, w);
Output: The label of M;
1: Train:
2: create lexical feature training set T1 = {(U1, V1), (U2, V2), …, (Un, Vn)};
3: train k1 on T1;
4: for i = 1 to n do
5: apply k1 on Ui to get Ci;
6: end for
7: create fused feature training set T2 = {(C1, W1, V1), (C2, W2, V2), …, (Cn, Wn, Vn)};
8: train k2 on T2;
9: Test:
10: apply k1 on u to obtain its label Cu;
11: apply k2 on M1 = (Cu, w) to obtain its label V;
12: return V;
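The two-stage procedure above can be sketched in Python. This is a minimal illustration, not the authors' implementation: the use of scikit-learn's LogisticRegression for both k1 and k2, and the synthetic lexical (U) and auxiliary (W) feature arrays, are assumptions for demonstration only.

```python
# Minimal sketch of the two-stage classification algorithm.
# Assumptions (not from the paper): both k1 and k2 are logistic
# regression classifiers, and U, W, V are synthetic numeric data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
U = rng.normal(size=(n, 5))              # lexical feature vectors U_i
W = rng.normal(size=(n, 3))              # auxiliary feature vectors W_i
V = (U[:, 0] + W[:, 0] > 0).astype(int)  # labels V_i (synthetic)

# Train, stage 1: train k1 on the lexical training set T1 = {(U_i, V_i)}
k1 = LogisticRegression().fit(U, V)
C = k1.predict(U).reshape(-1, 1)         # intermediate labels C_i

# Train, stage 2: train k2 on the fused set T2 = {(C_i, W_i, V_i)}
T2 = np.hstack([C, W])
k2 = LogisticRegression().fit(T2, V)

# Test: classify a new object M = (u, w)
def classify(u, w):
    c_u = k1.predict(u.reshape(1, -1)).reshape(1, 1)  # stage-1 label Cu
    return int(k2.predict(np.hstack([c_u, w.reshape(1, -1)]))[0])

label = classify(rng.normal(size=5), rng.normal(size=3))
```

Feeding the stage-1 prediction Cu to k2 alongside the remaining features is what makes this a stacked, rather than parallel, combination of the two classifiers.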
To the best of our knowledge, this is the first study on sarcasm identification that proposes both a novel feature extraction algorithm and a two-stage classification algorithm, considering lexical features in the first stage and fused features in the second stage. The impact of Algorithm 1 and Algorithm 2 is as follows:
The feature extraction algorithm provides the stepwise representation for extracting the proposed multi-feature in the form of a feature vector, which is employed as an input to the machine learning algorithm.
The pre-processing steps of the feature extraction algorithm handle data preparation, including tokenization, POS tagging, and data normalization (stemming and lemmatization), before the actual feature extraction process.
Both algorithms are independent of any programming language; thus, anyone without programming knowledge can easily understand them.
The two-stage classification algorithm describes the stepwise procedure for classifying tweets as sarcastic or non-sarcastic to obtain the predictive performance, which makes it simple to comprehend.