The behavior of only using the Database’s Collation is consistent with what we saw in the first test.
We can check what happens when an explicit Collation is applied.
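As a sketch of what "applying an explicit Collation" looks like (the table and column names here are hypothetical, not from the original test), the COLLATE keyword can be attached to the predicate so that the comparison, and hence the implicit NVARCHAR-to-VARCHAR conversion, uses the stated Collation rather than the column's or the Database's default:

```sql
-- Hypothetical names; only the COLLATE clause itself is the point here.
-- Forcing Korean_100_BIN2 means the comparison uses that Collation's
-- Code Page (949) for the implicit conversion of the NVARCHAR literal.
SELECT *
FROM   [dbo].[SomeTable]
WHERE  [Some8bitColumn] = N'₂' COLLATE Korean_100_BIN2;
```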
Yes, that’s it: across those three sentences in the note / warning, the documentation states only that it will be one or the other: “the code page that corresponds to the default collation of the database or column”.
If the Collation of the column being referenced were being used, then “Subscript 2” would still match the “2” in the Latin1 columns, but it would then match the “?” if it doesn’t match itself (due to being transformed).
However, we still need to see what happens when the character is in the Code Page of the Database’s Collation, but not in the Code Page of the referenced column.
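One way to construct that scenario (the table name and the choice of character are assumptions, not the author's original setup): in a Database whose default Collation uses Code Page 1255 (Hebrew), create a VARCHAR column whose Collation uses Code Page 1252 (Latin1), then filter on a Hebrew letter that exists in the former Code Page but not the latter:

```sql
-- Assumes the current Database's default Collation is Hebrew_100_BIN2
-- (Code Page 1255), while the column's Collation uses Code Page 1252.
CREATE TABLE #Scope
(
    [Latin1Col] VARCHAR(10) COLLATE Latin1_General_100_BIN2
);

-- N'א' (U+05D0) exists in CP1255 (the Database's Code Page) but not in
-- CP1252 (the column's Code Page), so which Collation drives the implicit
-- conversion determines whether the character survives or becomes '?'.
SELECT * FROM #Scope WHERE [Latin1Col] = N'א';
```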
Regarding the warning in the Microsoft documentation: it’s safe to assume that they’re speaking in terms of starting with a string that is already in the Code Page of the Database’s Collation and would not experience any transformation outside of the referenced column situation that they are trying to warn about.
Well, in order to find out whether it is one or the other, or both, or even neither, we will consult the primary authority on this topic: SQL Server. The test table has a VARCHAR and an NVARCHAR column for each of the three Collations:

(
    [Latin1_8bit]    VARCHAR(10)  COLLATE Latin1_General_100_BIN2,
    [Latin1_Unicode] NVARCHAR(10) COLLATE Latin1_General_100_BIN2,
    [Hebrew_8bit]    VARCHAR(10)  COLLATE Hebrew_100_BIN2,
    [Hebrew_Unicode] NVARCHAR(10) COLLATE Hebrew_100_BIN2,
    [Korean_8bit]    VARCHAR(10)  COLLATE Korean_100_BIN2,
    [Korean_Unicode] NVARCHAR(10) COLLATE Korean_100_BIN2
);

The “Subscript 2” character was chosen because it behaves differently in each of the three Collations that we are testing with. But, even in the second query, the data is in Code Page 949 (used by the Korean Collations). This is because queries are parsed (for proper syntax, variable name resolution, etc.) before anything is done with the query.
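Pulling the fragments together, a minimal sketch of how the test data might be loaded and queried follows; the table name, and exactly which columns received which values, are assumptions on my part (the column list and the “best fit” row are from the text):

```sql
-- Hypothetical table name; column definitions match the ones shown above.
CREATE TABLE #CollationTest
(
    [Latin1_8bit]    VARCHAR(10)  COLLATE Latin1_General_100_BIN2,
    [Latin1_Unicode] NVARCHAR(10) COLLATE Latin1_General_100_BIN2,
    [Hebrew_8bit]    VARCHAR(10)  COLLATE Hebrew_100_BIN2,
    [Hebrew_Unicode] NVARCHAR(10) COLLATE Hebrew_100_BIN2,
    [Korean_8bit]    VARCHAR(10)  COLLATE Korean_100_BIN2,
    [Korean_Unicode] NVARCHAR(10) COLLATE Korean_100_BIN2
);

INSERT INTO #CollationTest
VALUES (N'₂', N'₂', N'₂', N'₂', N'₂', N'₂'), -- U+2082 "Subscript Two"
       ('2', '2', '2', '2', '2', '2');       -- Possible "best fit" mapping

-- Compare an NVARCHAR literal against a VARCHAR column: the question under
-- test is which Collation's Code Page governs the implicit
-- NVARCHAR-to-VARCHAR conversion in the predicate.
SELECT *
FROM   #CollationTest
WHERE  [Latin1_8bit] = N'₂';
```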