Big data refers to collections of data sets so large and complex that they become difficult to process using conventional data management tools. The term 'Big Data' describes innovative techniques and technologies for capturing, storing, distributing, managing and analyzing petabyte-scale or larger datasets arriving at high velocity and in diverse structures. Big data may be structured, semi-structured or unstructured, which renders conventional data management methods inadequate. With the rapid growth of data generation, storage and networking capacity, big data is expanding quickly across all science and engineering domains. Data is generated from many different sources and can arrive in the system at varying rates. To process these large volumes of data in an inexpensive and efficient way, parallelism is employed. Big data is data whose scale, diversity and complexity require new architectures, techniques, algorithms and analytics to manage it and to extract value and hidden knowledge from it. The analysis of big data is often difficult because it frequently involves collections of mixed data based on different patterns or rules. The challenges include capture, storage, search, sharing, analysis and visualization. The trend toward larger data sets arises from the additional information that can be derived from analyzing a single large set of related data, compared with separate smaller sets holding the same total amount of data. Big data mining is the ability to extract useful information from streams of data or datasets whose velocity, variability and volume put them beyond conventional methods. This paper discusses applications of the big data processing model as well as big data mining. Hadoop is the core platform for structuring big data, and it solves the problem of making that data useful for analytics purposes.
Hadoop is an open source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance.
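The MapReduce model that underlies Hadoop can be sketched in plain Python. This is an illustrative toy of the map/shuffle/reduce phases (word counting), not Hadoop's actual Java API; all function names here are invented for the example:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit an intermediate (word, 1) pair for every word."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle step: group intermediate values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce step: aggregate (here, sum) the grouped values per key."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data needs hadoop", "hadoop processes big data"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts)  # each word mapped to its total occurrence count
```

In a real Hadoop job the map and reduce functions run in parallel on different cluster nodes over HDFS blocks, and the shuffle is performed by the framework over the network; the data flow, however, is the same.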
Table of Contents
Abstract
1. Introduction
2. Big Data
2.1 Problem with Big Data Processing
3. Hadoop
3.1 HDFS Design
3.2 Map Reduce Design
4. Big Data Architecture
5. Big Data Analytics
5.1 The Rise of the Cloud
5.2 The Global Introduction of NoSQL Databases
5.3 Hadoop, the Open Source Heart of Big Data Analytics
6. Data Mining for Big Data
7. Big Data Challenges
8. Other Elements of Hadoop
9. Conclusion
References
Keywords
Big Data, Hadoop, Map Reduce, HDFS, Hadoop Components, Data Mining, Hadoop Architecture
Authors
Ankit Jain [Department of Computer Science and Engineering, VIT University, Chennai, Tamilnadu, India]
Subbulakshmi T. [Prof., Department of Computer Science and Engineering, VIT University, Chennai, Tamilnadu, India]
Science & Engineering Research Support Center, Republic of Korea (IJDTA)
Year Founded
2006
Field
Engineering > Computer Science
About
1. Surveys and research on security engineering
2. Research and presentation of applied security engineering technologies
3. Hosting of academic conferences and exhibitions on security engineering
4. Mutual cooperation and information exchange on security engineering technology
5. Standardization projects and establishment of specifications for security engineering
6. Promotion of industry-academia-research cooperation in security engineering
7. International academic exchange and technical cooperation
8. Publication of journals on security engineering
9. Other activities necessary to achieve the objectives of the Center
Publications
Journal Title
International Journal of Database Theory and Application
Frequency
Bimonthly
pISSN
2005-4270
Coverage
2008~2016
Decimal Classification
KDC 505, DDC 605
Other articles in this issue / International Journal of Database Theory and Application Vol.9 No.5