Environment:
Hadoop version: 2.6.5
master: 192.168.0.160
slave1: 192.168.0.161
HBase version: 1.2.6
A counter does exactly what its name suggests: it counts. Counters are typically used for real-time statistics, such as tracking ad clicks.
HBase provides a counter feature that handles highly concurrent workloads efficiently while guaranteeing atomicity. It is a mechanism that lets an ordinary column be used as a counter. Without it, a client would have to lock the row, read the current value, add to it, write the result back to HBase, and release the lock; under load, that pattern leads to heavy contention for the row lock.

Create the table
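To see why atomicity matters, here is a small standalone Java sketch (illustrative only, not part of the HBase API): many threads bump a shared count concurrently, and the result is only correct because each increment is an atomic read-modify-write, which is the same guarantee HBase's server-side counter provides per cell.

```java
import java.util.concurrent.atomic.AtomicLong;

public class AtomicIncrementDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong counter = new AtomicLong(0);
        int threads = 8;
        int perThread = 10_000;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    // Atomic read-modify-write: the caller never takes an explicit lock.
                    counter.incrementAndGet();
                }
            });
            workers[i].start();
        }
        for (Thread w : workers) {
            w.join();
        }

        // Every increment is applied exactly once: 8 * 10000 = 80000.
        System.out.println("counter = " + counter.get()); // prints "counter = 80000"
    }
}
```

With a plain `long` and `counter++` instead, concurrent threads would lose updates, which is the race the locked read-add-write cycle described above tries (expensively) to avoid.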
hbase(main):005:0> create 'ns2:t1', 'f1'
0 row(s) in 1.3690 seconds
=> Hbase::Table - ns2:t1
Increment the counter
hbase(main):010:0> incr 'ns2:t1', '0001', 'f1:count'
COUNTER VALUE = 1
0 row(s) in 0.1770 seconds
Get the counter value
hbase(main):002:0> get_counter 'ns2:t1', '0001', 'f1:count'
COUNTER VALUE = 1
The same operations through the Java client API:

package com.tongfang.learn.hbase;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.*;
import org.apache.hadoop.hbase.util.Bytes;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import java.io.IOException;

public class HbaseCounterTest {

    private final static String TABLE_NAME = "ns2:t1";
    private final static String COLUMN_FAMILY = "f1";
    private final static String ROW_NAME = "0001";
    private final static String COLUMN1 = "counter1";
    private final static String COLUMN2 = "counter2";

    private Connection conn;

    @Before
    public void initConn() throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.0.161");
        conf.set("hbase.zookeeper.property.clientPort", "2181");
        conn = ConnectionFactory.createConnection(conf);
    }

    /**
     * Single counter: atomically increment one column.
     */
    @Test
    public void singleCounterTest() throws IOException {
        Table t = conn.getTable(TableName.valueOf(TABLE_NAME));
        long counter1 = t.incrementColumnValue(Bytes.toBytes(ROW_NAME),
                Bytes.toBytes(COLUMN_FAMILY),
                Bytes.toBytes(COLUMN1),
                1); // increment by 1
        System.out.println("counter1 = " + counter1);
    }

    /**
     * Multiple counters: increment several columns of one row in a single atomic call.
     */
    @Test
    public void multiCounterTest() throws IOException {
        Table t = conn.getTable(TableName.valueOf(TABLE_NAME));
        Increment incr = new Increment(Bytes.toBytes(ROW_NAME));
        incr.addColumn(Bytes.toBytes(COLUMN_FAMILY), Bytes.toBytes(COLUMN1), 1);
        incr.addColumn(Bytes.toBytes(COLUMN_FAMILY), Bytes.toBytes(COLUMN2), 2);
        t.increment(incr);
    }

    @After
    public void closeConn() throws IOException {
        conn.close();
    }
}
Note: before running the program, add the IP addresses of master and slave1 to the hosts file on the client machine, using whatever host names were configured for the Hadoop cluster.
C:\Windows\System32\drivers\etc\hosts

192.168.0.160 master
192.168.0.161 slave1
Result of running the tests:
hbase(main):024:0> scan 'ns2:t1'
ROW    COLUMN+CELL
 0001  column=f1:counter1, timestamp=1532420447441, value=\x00\x00\x00\x00\x00\x00\x00\x06
 0001  column=f1:counter2, timestamp=1532420447441, value=\x00\x00\x00\x00\x00\x00\x00\x06
1 row(s) in 0.0310 seconds
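The values printed by scan are the raw cell bytes: HBase stores a counter as an 8-byte big-endian long, so \x00\x00\x00\x00\x00\x00\x00\x06 is simply the number 6. In HBase client code you would decode such a value with Bytes.toLong; the standalone sketch below shows the same decoding with only java.nio.ByteBuffer, so it runs without any HBase dependency.

```java
import java.nio.ByteBuffer;

public class CounterDecodeDemo {
    public static void main(String[] args) {
        // The raw cell value shown by the shell: \x00\x00\x00\x00\x00\x00\x00\x06
        byte[] raw = {0, 0, 0, 0, 0, 0, 0, 6};

        // Counters are 8-byte big-endian longs; big-endian is ByteBuffer's default byte order.
        long value = ByteBuffer.wrap(raw).getLong();

        System.out.println("decoded counter = " + value); // prints "decoded counter = 6"
    }
}
```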
Reposted from: http://izjmb.baihongyu.com/