Here is a quick way to download a file in PowerShell:
Invoke-WebRequest -Uri <source> -OutFile <destination>
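For example, to grab a zip file (the URL and destination path here are only placeholders):
# Download a file over HTTPS and write it to a local path
Invoke-WebRequest -Uri "https://example.com/tools/installer.zip" -OutFile "C:\temp\installer.zip"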
If you want to set an environment variable and have it persist beyond the current session, you can use
[System.Environment]::SetEnvironmentVariable("JAVA_HOME", "c:\program files\openjdk\jdk-13.0.2", "Machine")
That last argument can take the values Process, User, or Machine.
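The same class can read the value back, and the scope argument works the same way if you only want the variable for the current user:
# Read the machine-scoped value back (only newly started processes see a freshly set value)
[System.Environment]::GetEnvironmentVariable("JAVA_HOME", "Machine")

# Or persist it for just the current user instead of machine-wide
[System.Environment]::SetEnvironmentVariable("JAVA_HOME", "c:\program files\openjdk\jdk-13.0.2", "User")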
Of course the SQL Server syntax for this doesn't quite jibe with what I want, but you can use the clause WITH (DROP_EXISTING = ON)
to have SQL Server handle updating an existing index, keeping the old index live until the new version is ready. You use it like
CREATE NONCLUSTERED INDEX idxMonthlyParkers_vendor_expiry_issue
ON [dbo].[tblParkers] ([VendorId],[LotTimezoneExpiryDate],[LotTimezoneIssueDate])
INCLUDE ([HangTagCode],[FirstName],[LastName])
WITH (DROP_EXISTING = ON)
However, that will throw an error if the index doesn't exist (of course), so you need to wrap it in an if:
if exists (SELECT *
FROM sys.indexes
WHERE name='idxMonthlyParkers_vendor_expiry_issue' AND object_id = OBJECT_ID('dbo.tblParkers'))
begin
CREATE NONCLUSTERED INDEX idxMonthlyParkers_vendor_expiry_issue
ON [dbo].[tblParkers] ([VendorId],[LotTimezoneExpiryDate],[LotTimezoneIssueDate])
INCLUDE ([HangTagCode],[FirstName],[LastName])
WITH (DROP_EXISTING = ON)
end
else
begin
CREATE NONCLUSTERED INDEX idxMonthlyParkers_vendor_expiry_issue
ON [dbo].[tblParkers] ([VendorId],[LotTimezoneExpiryDate],[LotTimezoneIssueDate])
INCLUDE ([HangTagCode],[FirstName],[LastName])
end
You can apply little transforms to configuration files by writing XML Document Transforms (XDT). For instance, here is one for adding an element to the system.web
section of the configuration file:
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <machineKey xdt:Transform="Insert" decryptionKey="abc" validationKey="def" />
  </system.web>
</configuration>
Here is one for removing an attribute
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.web>
    <compilation xdt:Transform="RemoveAttributes(debug)" />
  </system.web>
</configuration>
How about changing an attribute based on matching the key?
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <add key="MaxUsers" value="3" xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>
If you happen to be using Octopus Deploy, they have a feature you can add to your IIS deployment task to run these transformations.
There is a great little online testing tool at https://elmah.io/tools/webconfig-transformation-tester/ where you can plug in random things until you get them working.
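If you would rather check a transform locally, here is a rough sketch in PowerShell using the Microsoft.Web.XmlTransform library (the DLL path and file names are assumptions; the library comes in the Microsoft.Web.Xdt NuGet package):
# Load the XDT library - point this at wherever the NuGet package put the DLL
Add-Type -Path ".\Microsoft.Web.XmlTransform.dll"

# Load the config file to transform
$doc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument
$doc.PreserveWhitespace = $true
$doc.Load("web.config")

# Apply the transform and save the result to a new file
$transform = New-Object Microsoft.Web.XmlTransform.XmlTransformation("web.Release.config")
if ($transform.Apply($doc)) { $doc.Save("web.transformed.config") }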
Firebase can feed its data to BigQuery and then you can run queries there. The syntax is SQL-like but not quite, because they have internal record types. In the data that comes across from Firebase, event_params and user_properties are repeated record (collection) fields. The easiest way to deal with them is to flatten the structure by cross joining the table against its own unnested arrays:
SELECT r.event_name, p.key, p.value
FROM `pocketgeek-auto.analytics_258213689.events_intraday_20210305` r
CROSS JOIN UNNEST(r.event_params) AS p
WHERE p.key = 'DealerName'
This gets you a dataset with one row per matching event parameter. Adding a filter to drop the 'none' values is probably even better:
SELECT r.event_name, p.key, p.value
FROM `pocketgeek-auto.analytics_258213689.events_intraday_20210305` r
CROSS JOIN UNNEST(r.event_params) AS p
WHERE p.key = 'DealerName' AND p.value.string_value <> 'none'
Taking notes is hard. I think I took notes in university but I wasn't very good at it. I'd either put everything in them, making them unapproachably long, or I'd put in too little information. As a result I've kind of shied away from taking notes in my professional career. Unfortunately, it is starting to bite me more and more as I jump around between technologies and projects. I often find myself saying "shoot, I just did this 6 months ago - how do I do that?"
Looks like by default functions log at the Information level. To change the level you can set the application setting AzureFunctionsJobHost__logging__LogLevel__Default to some other value, such as Error or Warning.
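If you prefer setting that from PowerShell rather than the portal, a sketch using the Az.Functions module looks something like this (the function app name and resource group are placeholders):
# Assumes the Az.Functions module is installed and you are signed in with Connect-AzAccount
Update-AzFunctionAppSetting -Name "my-function-app" -ResourceGroupName "my-resource-group" `
    -AppSetting @{ "AzureFunctionsJobHost__logging__LogLevel__Default" = "Error" }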
If you want to disable adaptive sampling, that can be done in the host.json:
{
  "version": "2.0",
  "extensions": {
    "queues": {
      "maxPollingInterval": "00:00:05"
    }
  },
  "logging": {
    "logLevel": {
      "default": "Information"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false
      }
    }
  },
  "functionTimeout": "00:10:00"
}
In this example adaptive sampling is turned off so you get every log message.
A thing to note is that if you crank logging down to Error you won't see the invocations at all in the portal, but they are still running.
Got something in your terminal that is producing more output than you can scroll back through (I'm looking at you, terraform plan)? You can adjust the scrollback setting in the terminal's preferences.
This problem has caught me twice now and both times it cost me hours of painful debugging to track down. I have some data that I bulk load into a database on a regular basis, about 100 executions an hour. It has been working flawlessly but then I got a warning from SQL Azure that some of my columns didn’t have a sensitivity classification on them. I believe this feature is designed to annotate columns as containing financial or personal information.
The docs don't mention much about what annotating a column as sensitive actually does, other than requiring some special permissions to access the data. However, once we applied the sensitivity classification, bulk loading against the database stopped working, and it failed with only the error "Internal connection error". That error led me down all sorts of paths that weren't correct. The problem even happened on my local dev box, where the account attached to SQL Server had permission to do anything and everything. Once the sensitivity classification was removed, everything worked perfectly again.
Unfortunately I was caught by this thing twice, hence the blog post, so that next time I google this I'll find my own article.
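For reference, here is a rough sketch of adding and removing a classification on one of the columns from earlier, run through Invoke-Sqlcmd from the SqlServer PowerShell module (the server and database names are placeholders, and the label and information type are just examples):
# Add a sensitivity classification to a column
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "MyDatabase" -Query @"
ADD SENSITIVITY CLASSIFICATION TO dbo.tblParkers.FirstName
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Name');
"@

# Remove it again - removing the classification is what got bulk loading working for us
Invoke-Sqlcmd -ServerInstance "myserver.database.windows.net" -Database "MyDatabase" -Query @"
DROP SENSITIVITY CLASSIFICATION FROM dbo.tblParkers.FirstName;
"@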
I ran into a little situation today where I needed to deploy a database change that created a new user on our database. We deploy all our changes using the fantastic tool DbUp, so I figured the solution would be pretty easy, something like this:
use master;
create login billyzane with password='%XerTE#%^REFGK&*^reg5t';
However, when I executed this script, DbUp reported that it was unable to write to the SchemaVersions table. This is a special table in which DbUp keeps track of the change scripts it has applied. Of course it was unable to write to that table because the table lives back in the non-master database. My databases have different names in different environments (dev, prod, …) so I couldn't just add another use at the end to switch back to the original database, because I didn't know what it was called.
Fortunately, I already have the database name in a variable used for substitution against the script in DbUp. The code for this looks like
// Pull the database name out of the connection string so it can be substituted into the scripts
var dbName = connectionString.Split(';')
    .Where(x => x.StartsWith("Database") || x.StartsWith("Initial Catalog"))
    .Single()
    .Split('=')
    .Last();

var upgrader =
    DeployChanges.To
        .SqlDatabase(connectionString)
        .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly(), s =>
            ShouldUseScript(s, baseline, sampleData))
        .LogToConsole()
        .WithExecutionTimeout(TimeSpan.FromMinutes(5))
        .WithVariable("DbName", dbName)
        .Build();

var result = upgrader.PerformUpgrade();
So using that I was able to change my script to
use master;
create login billyzane with password='%XerTE#%^REFGK&*^reg5t';
use $DbName$;
Which ran perfectly. Thanks, DbUp!