IJRCS – Volume 5 Issue 4 Paper 7


Author's Name: Savita Gunjal

Volume 05, Issue 04, Year 2018, ISSN No: 2349-3828, Pages 18-21



Today's age is one of smart living, and the smartphone has become one of its essential necessities. Just as we use various software packages on a personal computer, a smartphone needs various mobile applications, and the applications installed on a smartphone make daily life easier. To download an application, the smartphone user visits the app store supported by the phone's operating system, such as the Google Play Store or Apple's App Store. There the user sees lists of applications, but these lists are often built on the basis of promotion or advertisement, and the user has no prior knowledge of which applications are useful and which are not. The user therefore browses the list and downloads applications, yet sometimes a downloaded application does not work or is not useful. This indicates ranking fraud in the mobile application list. To avoid such fraud, we propose a system that re-ranks applications. First, we identify the active period of each application, called its leading session. We then investigate three types of evidence: ranking-based evidence, rating-based evidence, and review-based evidence. Finally, we aggregate these three evidences to compute an overall fraud score.
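The two steps named above, mining leading sessions from an app's ranking history and aggregating the three evidence scores, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the chart threshold, the gap used to merge nearby ranked runs, and the equal aggregation weights are all assumptions introduced here.

```python
def leading_sessions(daily_ranks, threshold=300, gap=2):
    """Split an app's daily ranking history into leading sessions:
    maximal runs of days on which the app ranked within `threshold`,
    merging runs separated by fewer than `gap` off-chart days.
    `daily_ranks` is a list of ranks (None = not on the chart).
    The threshold and gap values are illustrative assumptions."""
    sessions, current, idle = [], [], 0
    for day, rank in enumerate(daily_ranks):
        if rank is not None and rank <= threshold:
            current.append(day)   # day belongs to the current session
            idle = 0
        else:
            idle += 1
            if current and idle >= gap:
                # too many consecutive off-chart days: close the session
                sessions.append((current[0], current[-1]))
                current = []
    if current:
        sessions.append((current[0], current[-1]))
    return sessions


def aggregate(ranking_ev, rating_ev, review_ev, weights=(1/3, 1/3, 1/3)):
    """Combine the three evidence scores (each assumed in [0, 1]) by a
    weighted linear sum; equal weights are an assumption, not the paper's."""
    return sum(w * s for w, s in zip(weights, (ranking_ev, rating_ev, review_ev)))
```

For example, a history with two ranked runs separated by two off-chart days yields two leading sessions, and three evidence scores of 0.9, 0.6, and 0.3 aggregate to 0.6 under equal weights.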


Mobile Apps, Ranking Fraud, Ranking-Based Evidence, Rating-Based Evidence, Review-Based Evidence, Evidence Aggregation Function

