1. Java's built-in serialization (java.io.Serializable)
Required jars: none
Sample code:
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}

object JavaSerialize {
  def serialize(obj: Object): Array[Byte] = {
    var oos: ObjectOutputStream = null
    var baos: ByteArrayOutputStream = null
    try {
      baos = new ByteArrayOutputStream()
      oos = new ObjectOutputStream(baos)
      oos.writeObject(obj)
      baos.toByteArray()
    } catch {
      case e: Exception =>
        println(e.getLocalizedMessage + e.getStackTraceString)
        null
    }
  }

  def deserialize(bytes: Array[Byte]): Object = {
    var bais: ByteArrayInputStream = null
    try {
      bais = new ByteArrayInputStream(bytes)
      val ois = new ObjectInputStream(bais)
      ois.readObject()
    } catch {
      case e: Exception =>
        println(e.getLocalizedMessage + e.getStackTraceString)
        null
    }
  }
}
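For reference, a minimal round trip with the helpers above might look like the following sketch. The Person case class is a hypothetical example type; any class implementing java.io.Serializable works the same way.

// Hypothetical example type; any java.io.Serializable class works
case class Person(name: String, age: Int) extends Serializable

object JavaSerializeDemo {
  def main(args: Array[String]): Unit = {
    val bytes = JavaSerialize.serialize(Person("Alice", 30))
    println(s"serialized size: ${bytes.length} bytes")   // binary output, not human-readable
    val back = JavaSerialize.deserialize(bytes).asInstanceOf[Person]
    println(back)                                        // Person(Alice,30)
  }
}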
2. Jackson (json4s) serialization
Required jars: json4s-jackson_2.10-3.2.11.jar, jackson-annotations-2.3.0.jar, jackson-core-2.3.1.jar, jackson-databind-2.3.1.jar (all available from Maven)
Sample code:
import org.json4s.NoTypeHints
import org.json4s.jackson.Serialization
import org.json4s.jackson.Serialization._

object JacksonSerialize {
  def serialize[T <: Serializable with AnyRef : Manifest](obj: T): String = {
    implicit val formats = Serialization.formats(NoTypeHints)
    write(obj)
  }

  def deserialize[T: Manifest](objStr: String): T = {
    implicit val formats = Serialization.formats(NoTypeHints)
    read[T](objStr)
  }
}
The code is again very simple. The advantage is that the serialized result is JSON, so it can be read directly and is more human-friendly; the drawbacks are that serialization takes noticeably longer and the output is not particularly compact.
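To illustrate the readable output, here is a minimal sketch using the same hypothetical Person case class as above; json4s serializes a case class from its constructor fields.

case class Person(name: String, age: Int) extends Serializable  // hypothetical example type

object JacksonSerializeDemo {
  def main(args: Array[String]): Unit = {
    val json = JacksonSerialize.serialize(Person("Alice", 30))
    println(json)                                     // {"name":"Alice","age":30}
    val back = JacksonSerialize.deserialize[Person](json)
    println(back)                                     // Person(Alice,30)
  }
}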
3. Avro serialization
Required jars: avro-tools-1.7.7.jar (used to compile the schema into a Java class), avro-1.7.7.jar
Step 1: define the schema file user.avsc describing the data structure, as follows
{"namespace": "example.avro", "type": "record", "name": "User", "fields": [ {"name": "name", "type": "string"}, {"name": "favorite_number", "type": ["int", "null"]}, {"name": "favorite_color", "type": ["string", "null"]} ] }
Step 2: generate the class with the avro-tools jar
(1) Place avro-tools-1.7.7.jar and user.avsc in the same directory.
(2) Run java -jar avro-tools-1.7.7.jar compile schema user.avsc . (the last argument is the output directory).
(3) A User.java file is generated under that directory, following the namespace as the package path (example/avro/User.java); reference this class in your code.
Step 3: sample code
import java.io.ByteArrayOutputStream

import example.avro.User
import org.apache.avro.file.{DataFileReader, DataFileWriter}
import org.apache.avro.io.{DecoderFactory, EncoderFactory}
import org.apache.avro.specific.{SpecificDatumReader, SpecificDatumWriter}

object AvroSerialize {
  // Serialize the object and return the result as a byte array
  def serialize(user: User): Array[Byte] = {
    val bos = new ByteArrayOutputStream()
    val writer = new SpecificDatumWriter[User](User.getClassSchema)
    val encoder = EncoderFactory.get().binaryEncoder(bos, null)
    writer.write(user, encoder)
    encoder.flush()
    bos.close()
    bos.toByteArray
  }

  // Deserialize a byte array back into an object
  def deserialize(bytes: Array[Byte]): Any = {
    val reader = new SpecificDatumReader[User](User.getClassSchema)
    val decoder = DecoderFactory.get().binaryDecoder(bytes, null)
    reader.read(null, decoder)
  }

  // Serialize the object into an Avro data file
  def serialize(user: User, path: String): Unit = {
    val userDatumWriter = new SpecificDatumWriter[User](User.getClassSchema)
    val dataFileWriter = new DataFileWriter[User](userDatumWriter)
    dataFileWriter.create(user.getSchema(), new java.io.File(path))
    dataFileWriter.append(user)
    dataFileWriter.close()
  }

  // Deserialize objects from an Avro data file
  def deserialize(path: String): List[User] = {
    val reader = new SpecificDatumReader[User](User.getClassSchema)
    val dataFileReader = new DataFileReader[User](new java.io.File(path), reader)
    var users: List[User] = List[User]()
    while (dataFileReader.hasNext()) {
      users :+= dataFileReader.next()
    }
    users
  }
}
Two variants are shown here: one serializes to a byte array, the other to a data file. This approach is somewhat more involved than the previous two; this kind of serialization is also used in Hadoop RPC.
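For completeness, a round trip through the helpers above might look like the sketch below. It assumes the User class generated in step 2, which (as in the Avro getting-started guide) exposes an all-args constructor and getters; the field values and the users.avro path are made up.

object AvroSerializeDemo {
  def main(args: Array[String]): Unit = {
    // The generated class provides an all-args constructor (and a newBuilder() API)
    val user = new User("Alice", 7, "blue")

    // Byte-array round trip
    val bytes = AvroSerialize.serialize(user)
    val back = AvroSerialize.deserialize(bytes).asInstanceOf[User]
    println(back.getName + " / " + back.getFavoriteNumber)

    // Data-file round trip
    AvroSerialize.serialize(user, "users.avro")
    AvroSerialize.deserialize("users.avro").foreach(println)
  }
}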
4. Kryo serialization
Required jars: kryo-4.0.0.jar, minlog-1.2.jar, objenesis-2.6.jar, commons-codec-1.8.jar
Sample code:
import java.io.ByteArrayOutputStream

import com.esotericsoftware.kryo.Kryo
import com.esotericsoftware.kryo.io.{Input, Output}
import com.esotericsoftware.kryo.serializers.JavaSerializer
import org.objenesis.strategy.StdInstantiatorStrategy

object KryoSerialize {
  // One Kryo instance per thread: Kryo is not thread-safe and is expensive to create
  val kryo = new ThreadLocal[Kryo]() {
    override def initialValue(): Kryo = {
      val kryoInstance = new Kryo()
      kryoInstance.setReferences(false)
      kryoInstance.setRegistrationRequired(false)
      kryoInstance.setInstantiatorStrategy(new StdInstantiatorStrategy())
      kryoInstance.register(classOf[Serializable], new JavaSerializer())
      kryoInstance
    }
  }

  def serialize[T <: Serializable with AnyRef : Manifest](t: T): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val output = new Output(baos)
    output.clear()
    try {
      kryo.get().writeClassAndObject(output, t)
    } catch {
      case e: Exception => e.printStackTrace()
    }
    output.toBytes
  }

  def deserialize[T <: Serializable with AnyRef : Manifest](bytes: Array[Byte]): T = {
    val input = new Input()
    input.setBuffer(bytes)
    kryo.get().readClassAndObject(input).asInstanceOf[T]
  }
}
In my local tests this was the fastest approach. The key is to reuse Kryo instances, since creating them in large numbers is very expensive, and to handle multi-threaded access to the Kryo object correctly (hence the ThreadLocal above). Spark also uses Kryo.
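A minimal round trip with the KryoSerialize object above, again using the hypothetical Person case class from the earlier examples:

case class Person(name: String, age: Int) extends Serializable  // hypothetical example type

object KryoSerializeDemo {
  def main(args: Array[String]): Unit = {
    val bytes = KryoSerialize.serialize(Person("Alice", 30))
    println(s"serialized size: ${bytes.length} bytes")
    val back = KryoSerialize.deserialize[Person](bytes)
    println(back)   // Person(Alice,30)
  }
}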
There are other serialization options as well, such as Protobuf and Thrift, which take a bit more setup. I have not gotten my environment working for them yet; I will write them up once I do.
Source: https://blog.csdn.net/u013597009/article/details/78538018