
Posts

Showing posts from 2018

Deep in Transaction (3)

Deep in Transaction (3) In the last blog, we talked about four kinds of problems that arise when we lack a reasonable implementation of transactions: Lost Update; Inconsistent Retrieval; Dirty Read; Premature Write. Today, we focus on how to solve them. How The first tool we can come up with when dealing with concurrent access problems is locking. Besides it, we will also see how optimistic concurrency control and timestamp ordering solve them. Lock The basic idea of a lock is very simple: when multiple clients want to access the same object, only one client gets the lock and continues; the other clients wait until the lock is free – serialization between multiple clients. In the context of a Transaction, locking is relatively complex. Two Phase Locking We have introduced the conflicting operations between transactions, and to resolve them we can make transactions serially equivalent (i.e., fully serialized). In detail, Serial Equivalence requires that all of a transaction’s ...
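The excerpt breaks off mid-sentence, but the two-phase locking rule it introduces is easy to sketch. Below is a minimal, hypothetical Java illustration (class and method names are mine, not from the post): locks may only be acquired while the transaction runs (the growing phase) and are all released together at commit (the shrinking phase), which is what forces conflicting transactions into a serially equivalent order.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Lock;

/** Minimal sketch of strict two-phase locking: a transaction only
 *  acquires locks while running (growing phase) and releases them
 *  all together at commit/abort (shrinking phase). */
class TwoPhaseLockingTx {
    private final Deque<Lock> held = new ArrayDeque<>();

    /** Growing phase: acquire the lock guarding an object before any access. */
    void access(Lock guard) {
        guard.lock();       // blocks until no conflicting transaction holds it
        held.push(guard);
    }

    /** Shrinking phase: release everything at once, never earlier. */
    void commit() {
        while (!held.isEmpty()) {
            held.pop().unlock();
        }
    }
}
```

A `ReentrantLock` per object would serve as the guard here; because no lock is released before commit, releasing any lock early would reopen the door to the dirty reads and premature writes listed above.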

Deploy to Maven Central

Deploy to Maven Central Recently, we needed to deploy some jars to Maven Central for an open-source project, and after some trial and error we finally made it work. This blog is a record for future readers. Registration Introduction to Sonatype OSSRH Sonatype OSSRH uses Nexus to provide a deployment service for open-source projects; this repository is called Maven Central, to which OSSRH allows us to submit, and from which we can download, binary jars. Registration Create an account on JIRA Create a new issue about your deploy request Notice: Only when this issue’s state changes to RESOLVED can we start to deploy jars; If we deploy on behalf of ourselves, we should be careful about the groupId: e.g. our project is hosted on GitHub and called syncer , so the groupId can be com.github.zzt93 ; Update POM After registration and the issue being resolved, we need to add more info to the pom to do the deployment: name , description , url , groupId , artifactId , version , license , developers , scm < name > ${projec...
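The pom fragment in the excerpt is truncated. As a hedged illustration only, the metadata fields Maven Central requires might look like the snippet below, reusing the syncer/zzt93 names mentioned above (the license choice, developer details, and emails are placeholder values, not from the post):

```xml
<name>${project.groupId}:${project.artifactId}</name>
<description>One-line description of the project</description>
<url>https://github.com/zzt93/syncer</url>

<licenses>
  <license>
    <name>The Apache License, Version 2.0</name>
    <url>http://www.apache.org/licenses/LICENSE-2.0.txt</url>
  </license>
</licenses>

<developers>
  <developer>
    <name>Your Name</name>
    <email>you@example.com</email>
  </developer>
</developers>

<scm>
  <connection>scm:git:git://github.com/zzt93/syncer.git</connection>
  <developerConnection>scm:git:ssh://github.com/zzt93/syncer.git</developerConnection>
  <url>https://github.com/zzt93/syncer</url>
</scm>
```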

WeChat Pay Pitfalls (2)

WeChat Pay Pitfalls (2) In the previous payment-related blog, we introduced two big pitfalls of integrating Alipay and WeChat Pay: the first was setting up the sandbox test environment, the second was writing the callback endpoint. In this post, we dig deeper. If you need to support mobile web payment inside WeChat, integrating the “H5支付” (H5 payment) product alone is not enough, because H5 payment can only be used for mobile web payment outside WeChat; if you want users to scan a code inside WeChat, open a page, and then pay, you need the “公众号支付” (Official Account payment) product, which is much more complex than H5 payment. First, with H5 payment, after an order is placed the backend only needs to return a link with a callback to invoke WeChat Pay, and after payment completes the user returns to the corresponding page. With Official Account payment, the frontend has to take the backend’s return value (prepare_id) and then make one more signed request. Since secret keys and other sensitive data can never be passed to or kept on the frontend, the backend must sign the request and then send all the parameters to the frontend, which makes the request. Moreover, when initially asking the backend to generate the prepare_id, a different parameter is required: the openid. The openid identifies a user, but it is not a global userid; it is an id generated for a user relative to a particular WeChat application (appid). To obtain the openid you need WeChat’s OAuth authorization API, but before development a series of configurations is required (it must be said that WeChat’s configuration is far more convoluted than Alipay’s; and comparing API friendliness, SDK support, and community and official help, Alipay’s openness is much better than WeChat’s). Set the payment directory On the WeChat merchant platform ( pay.weixin.qq.com ), set the payment directory for your Official Account payment; the setting path is: Merchant Platform -> Product Center -> Development Configuration, as shown in Figure 7.7. When payment is requested, Official Account payment verifies whether the request origin has been configured on the merchant platform, so you must make sure the payment directory is configured correctly; otherwise verification fails and the payment request will not succeed. Configuring the root directory is now supported; the configuration takes some time to take effect, usually within 5 minutes. Note that the official platform sometimes asks for a domain (url), where no protocol may be added, and sometimes for a link, where the protocol (http or https) is required. Set the authorized callback domain Besides setting the payment directory, to obtain the openid you need to set the authorized callback domain (Official Account Platform -> Settings -> Official Account Settings -> Web Authorization Domain): when developing Official Account payment, the unified order API requires the user’s openid to be passed, and obtaining the openid requires you to set on the official platform the domain for obtaining openid...
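The excerpt is cut off before any code, but the backend-side signing it describes can be sketched. The following is a hedged Java illustration of WeChat Pay’s documented (v2) MD5 signing rule as I understand it: sort parameters by key, concatenate key=value pairs, append the merchant API key, then MD5 and uppercase. The class and parameter names are mine, not a full integration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;
import java.util.TreeMap;

/** Sketch of the backend request signing described in the post:
 *  the merchant secret never leaves the backend; only the final
 *  signed parameter set is handed to the frontend. */
class WxPaySigner {
    static String sign(Map<String, String> params, String apiKey) throws Exception {
        // TreeMap sorts keys in ASCII order, as the signing rule requires
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
            if (e.getValue() != null && !e.getValue().isEmpty()) {
                sb.append(e.getKey()).append('=').append(e.getValue()).append('&');
            }
        }
        sb.append("key=").append(apiKey); // merchant secret stays on the backend
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(sb.toString().getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02X", b));
        }
        return hex.toString();
    }
}
```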

Spark Application Setup Pitfalls

Spark Application Setup Pitfalls This is a practical blog that records the pitfalls I met while installing and configuring the Spark environment. Before we start, we assume that you have installed Java and downloaded Hadoop/Spark . Because Spark relies on some of Hadoop’s functionality, like HDFS or YARN, in most cases we also install & configure Hadoop. When downloading, we need to make sure the version of Spark matches that of Hadoop (more than that, in our case, we have a Java application which holds a SparkContext and acts as a driver program , so we need to make sure spark-core & spark-sql etc. use the same Spark & Hadoop versions as the Spark cluster). Hadoop Overview After extracting the hadoop-2.6.5.tar.gz file, we can change into the config directory ./etc/hadoop . There exist two kinds of configurations: Read-only default configuration - core-default.xml, hdfs-default.xml, yarn-default.xml and mapred-default....
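Since the post stresses that the Java application holding the SparkContext is the driver program, a minimal sketch of such a driver may help. Everything here (app name, master URL, the sample job) is a placeholder of my own; the spark-core/spark-sql artifacts this compiles against must match the cluster’s Spark and Hadoop versions, which is the pitfall the post warns about:

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

/** Hypothetical driver-program skeleton: the JVM that constructs
 *  the SparkContext is the driver, so its Spark dependencies must
 *  line up with the cluster it submits work to. */
public class DriverApp {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("driver-app")   // placeholder name
                .setMaster("yarn");         // or spark://host:7077, local[*]
        // JavaSparkContext is Closeable, so try-with-resources shuts it down
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            long n = sc.parallelize(Arrays.asList(1, 2, 3)).count();
            System.out.println("count = " + n);
        }
    }
}
```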

Common Grok Rules

Common Grok Rules In this blog, we are going to share some common grok patterns used in Logstash to parse logs from Nginx (access log & error log with Naxsi ) and ufw. Nginx access.log Example: 192.168.1.1 - - [30/Oct/2018:09:38:28 +0800] "GET /question?id=yyy HTTP/1.1" 200 808 "https://xxx.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" Grok rules: %{IPORHOST:clientip} (?:-|(%{WORD}.%{WORD})) %{USER:ident} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) "%{GREEDYDATA:referrer}" "%{GREEDYDATA:agent}"( "%{GREEDYDATA:forwarder}")? error.log Examples: 2018/10/30 16:35:19 [error] 39685#0: *117137 NAXSI_FMT: ip=192.168.1.1&server=api.xxx&uri=/material&learning=1&vers=0.56&total_proc...
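As a usage sketch (not from the post): the access-log pattern above would typically sit inside a Logstash grok filter, usually paired with a date filter to promote the HTTPDATE capture to @timestamp. The pattern is elided with `...` below; the full rule quoted above goes in its place:

```
filter {
  grok {
    # the full Nginx access-log pattern quoted above goes here
    match => { "message" => "%{IPORHOST:clientip} ... \"%{GREEDYDATA:agent}\"" }
  }
  date {
    # parses captures like 30/Oct/2018:09:38:28 +0800 into @timestamp
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```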

LevelDB Source Reading (1): Structure

LevelDB Source Reading (1): Structure LevelDB “is an open source on-disk key-value store.” After reading some documents, I have a basic understanding of LevelDB, so I came up with some questions about LevelDB’s structure to answer while reading the source code. Structure Log File: repair/recover db A log file (*.log) stores a sequence of recent updates. Each update is appended to the current log file. The log file contents are a sequence of 32KB blocks. The only exception is that the tail of the file may contain a partial block. Block format: Each block consists of a sequence of records: block := record* trailer? record := checksum: uint32 // crc32c of type and data[] ; little-endian length: uint16 // little-endian type: uint8 // One of FULL, FIRST, MIDDLE, LAST data: uint8[length] // data is LengthPrefixedSlice with type from batch data definition in Block : data: also named `writeBatch` in levelDB // WriteBatch header has an 8-byte ...
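LevelDB itself is C++, but the record layout quoted above is easy to mirror. Below is a minimal, hypothetical Java decoder for one log record (field widths and the FULL/FIRST/MIDDLE/LAST types come from the quoted format; the numeric type values follow LevelDB’s log_format, and the class itself is my own illustration, not LevelDB code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

/** Decodes one record from a 32KB log block, following the layout
 *  quoted above: checksum (uint32 LE) + length (uint16 LE) +
 *  type (uint8), then `length` bytes of data. Purely illustrative. */
class LogRecordReader {
    static final int HEADER_SIZE = 4 + 2 + 1;          // checksum + length + type
    static final int FULL = 1, FIRST = 2, MIDDLE = 3, LAST = 4; // per log_format

    static byte[] readRecord(ByteBuffer block) {
        block.order(ByteOrder.LITTLE_ENDIAN);           // all fields little-endian
        long checksum = block.getInt() & 0xFFFFFFFFL;   // crc32c of type and data
        int length    = block.getShort() & 0xFFFF;      // payload length
        int type      = block.get() & 0xFF;             // FULL/FIRST/MIDDLE/LAST
        byte[] data = new byte[length];
        block.get(data);
        // a real reader would verify the crc32c and stitch FIRST/MIDDLE/LAST
        // fragments back into one logical record before returning it
        return data;
    }
}
```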