Currently, I am using SQL Server 2016 with JSON and I want to join collections together. So far I created two collections:
CREATE TABLE collect_person(person...)
CREATE TABLE collect_address(address...)
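Each collection just stores one JSON document per row in a single NVARCHAR(MAX) column, roughly like this (the column names person and address are placeholders for the abbreviated definitions above):
-- Minimal sketch of the two collections; column names are placeholders.
-- The ISJSON constraints are optional and just guard against non-JSON text.
CREATE TABLE collect_person
(
    person NVARCHAR(MAX) NOT NULL
        CONSTRAINT chk_collect_person_json CHECK (ISJSON(person) = 1)
);
CREATE TABLE collect_address
(
    address NVARCHAR(MAX) NOT NULL
        CONSTRAINT chk_collect_address_json CHECK (ISJSON(address) = 1)
);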
The JSON document in the first collection (collect_person) will look like this:
{
    "id" : "P1",
    "name" : "Sarah",
    "addresses" : {
        "addressId" : [
            "ADD1",
            "ADD2"
        ]
    }
}
The JSON documents in the second collection (collect_address) will look like the ones below:
{
    "id" : "ADD1",
    "city" : "San Jose",
    "state" : "CA"
}
{
    "id" : "ADD2",
    "city" : "Las Vegas",
    "state" : "NV"
}
I want to get the addresses of the person named "Sarah", so the output will be something like:
[
    {"city" : "San Jose", "state" : "CA"},
    {"city" : "Las Vegas", "state" : "NV"}
]
I do not want to convert the JSON to relational data and then back to JSON. Is this possible in SQL Server 2016 with JSON, and if so, how? Thank you in advance.
2 solutions
#1
0
I have never actually used JSON this way before, and I wasn't aware that you could join collections using it. JSON was designed for data exchange to/from a server and acts as a transport format.
#2
0
I am a little late to the question, but it can be done via CROSS APPLY, and I also used common table expressions. Depending on the table size, I would suggest creating a persisted computed column on the id field of each table (assuming the data won't change and that there is a single addressId per record), or adding some other key value that can be indexed and used to limit the records whose JSON needs to be parsed. This is a simple example and it hasn't been tested for performance, so YMMV.
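As a rough sketch of that indexing idea (this assumes permanent tables rather than the table variables used in the example below, and the column and index names are made up):
-- Expose the JSON id as a persisted computed column so it can be indexed;
-- the CAST keeps the index key small (JSON_VALUE returns nvarchar(4000)).
ALTER TABLE collect_address
    ADD json_id AS CAST(JSON_VALUE([address], '$.id') AS VARCHAR(50)) PERSISTED;
CREATE INDEX IX_collect_address_json_id
    ON collect_address (json_id);
The same pattern on collect_person (or on whatever key you join by) lets the join seek on the computed column instead of parsing every document.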
Building Example Tables
DECLARE @collect_person AS TABLE
    (Person NVARCHAR(MAX))
DECLARE @collect_address AS TABLE
    ([Address] NVARCHAR(MAX))

INSERT INTO @collect_person (Person)
SELECT N'{
    "id" : "P1",
    "name" : "Sarah",
    "addresses" : {
        "addressId" : [
            "ADD1",
            "ADD2"
        ]
    }
}'

INSERT INTO @collect_address ([Address])
VALUES
    (N'{
        "id" : "ADD1",
        "city" : "San Jose",
        "state" : "CA"
    }')
    ,(N'{
        "id" : "ADD2",
        "city" : "Las Vegas",
        "state" : "NV"
    }')
Querying the Tables
;WITH persons AS (
    SELECT --JP.*
        JP.id
        ,JP.name
        ,JPA.addressId -- Or remove the with clause for JPA and just use JPA.value as addressId
    FROM @collect_person
    CROSS APPLY OPENJSON([person])
        WITH (
            id varchar(50) '$.id'
            ,[name] varchar(50) '$.name'
            ,addresses nvarchar(max) AS JSON
        ) AS JP
    CROSS APPLY OPENJSON(JP.addresses, '$.addressId')
        WITH (
            addressId varchar(250) '$'
        ) AS JPA
)
,Addresses AS (
    SELECT A.*
    FROM @collect_address AS CA
    CROSS APPLY OPENJSON([Address])
        WITH (
            id varchar(50) '$.id'
            ,city varchar(50) '$.city'
            ,state varchar(2) '$.state'
        ) AS A
)
SELECT * FROM persons
INNER JOIN Addresses
    ON persons.addressId = Addresses.id
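If you need the result back as JSON like the output shown in the question rather than as rows, the final SELECT at the end of the CTE chain above can be swapped for something along these lines (FOR JSON PATH emits an array of objects):
SELECT Addresses.city, Addresses.state
FROM persons
INNER JOIN Addresses
    ON persons.addressId = Addresses.id
WHERE persons.[name] = 'Sarah'
FOR JSON PATH
-- e.g. [{"city":"San Jose","state":"CA"},{"city":"Las Vegas","state":"NV"}]
-- (row order is not guaranteed without an ORDER BY)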
Again, this is not the ideal way to do this, but it works. As stated before, you should probably have an indexed key field on each table to limit the scans and JSON parsing done against the table.
There is native compilation, but it is new to me and I am not familiar with the pros and cons.
Optimize JSON processing with in-memory OLTP