megacolorboy

Abdush Shakoor's Weblog

Writings, experiments & ideas.

How to Change the Modification Date of a Folder Using PowerShell

Recently, I had an interesting experience with a client who can be a bit nosy—and occasionally bossy too. They have a strict policy of deploying updates during off-peak hours on Fridays.

Like most developers, I’m not a fan of deploying on Fridays 😡.

To navigate this, I deploy the updates on the server earlier in the week (without their knowledge) and place them in a separate directory until we receive the green light to switch the website to the new directory via server settings.

If you’re wondering why I use a separate directory, it’s to ensure I have a quick rollback option if needed. Oh, and did I mention they’ve blocked git on the production server? So, no version control system (VCS) for us.

Anyway, I once deployed earlier than planned and realized they might check the deployment timestamps. To stay ahead, I wrote a quick PowerShell script to tweak the folder timestamps.

Steps to Change Folder Modification Date

Open PowerShell as Administrator

  • Search for "PowerShell" in the Start menu.
  • Right-click on it and select Run as Administrator.

Run the Command

Use the following commands to set the modification date for a folder:

$path = "C:\Path\To\Your\Folder"
$date = Get-Date "2025-01-01 10:00:00"
(Get-Item $path).LastWriteTime = $date
  • Replace C:\Path\To\Your\Folder with the full path of the folder.
  • Replace 2025-01-01 10:00:00 with the desired date and time in the format yyyy-MM-dd HH:mm:ss.

Verify the Change

To confirm the modification date has been updated, use:

(Get-Item $path).LastWriteTime

Other Timestamps You Can Modify

PowerShell allows you to adjust other timestamps as well:

Creation Date:

(Get-Item $path).CreationTime = $date

Accessed Date:

(Get-Item $path).LastAccessTime = $date

Why Is This Useful?

This method is great for:

  • Organizing folders with custom timestamps.
  • Simulating specific scenarios for testing.
  • Avoiding the hassle of installing third-party tools.

Situations like this have made me realize that PowerShell's flexibility makes it the perfect tool for quick tasks. It's amazing how a simple command can give you full control over folder metadata!

Hope you found this useful (and keep your tracks covered!)

Fixing a CoreCLR Load Error in IIS with Self-Contained Deployment

Today I learned how to resolve a tricky issue I encountered while hosting an ASP.NET Core application in IIS. The error was:

Application '/LM/W3SVC/2/ROOT' with physical root 'C:\inetpub\wwwroot\xxx_dev4' failed to load coreclr.

Diagnosing the Issue

This error pointed to a failure in loading the .NET Core Common Language Runtime (CoreCLR). It’s a common problem when the server is missing the correct version of the .NET runtime required by the application. After some investigation, I found these potential causes:

  • The .NET Core Hosting Bundle wasn’t installed on the server.
  • The required .NET runtime version wasn’t present.
  • The application pool in IIS was misconfigured.
  • There were potential permission issues for the application folder.

The Self-Contained Solution

To avoid dependency on the server’s runtime installation, I decided to try publishing the application as Self-Contained from Visual Studio 2022. This method bundles the .NET runtime and all dependencies with the application, ensuring it runs regardless of the server’s configuration.

Steps to Publish as Self-Contained

Here’s how I published my app as Self-Contained in Visual Studio 2022:

  1. Right-click on the project in Solution Explorer and choose Publish.
  2. In the Pick a publish target dialog, I selected Folder.
  3. Clicked Next and Finish to confirm the target.
  4. Opened the Settings in the Publish tab.
    • Changed Deployment Mode to Self-Contained.
    • Selected the appropriate Target Runtime for my server (e.g., win-x64).
  5. Saved the settings and clicked Publish.
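If you prefer the command line, the same self-contained publish can be produced with the .NET CLI. This is a sketch, not the exact steps from Visual Studio above; the runtime identifier (win-x64 here) and the output folder are assumptions you'd adjust for your own server:

```shell
# Publish as self-contained from the project directory (requires the .NET SDK).
# -r win-x64 is the runtime identifier for a 64-bit Windows server;
# --self-contained true bundles the .NET runtime with the app.
dotnet publish -c Release -r win-x64 --self-contained true -o ./publish
```

The contents of ./publish would then be copied to the IIS site's physical root, just like the Folder-publish output.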

The published output included all the necessary files, including the .NET runtime, making the app ready to deploy to IIS without relying on the server’s runtime installation.

Results

After deploying the self-contained package to IIS, the application ran successfully! The CoreCLR error was resolved because the app now used the bundled runtime instead of depending on the server’s configuration.

Key Takeaways

  • Self-Contained Deployment is a lifesaver when hosting environments lack the required .NET runtime.
  • It’s a great way to ensure version isolation, as the app always runs with the specific runtime version it was built with.
  • The tradeoff is a larger deployment package and manual updates for runtime patches.

Conclusion

Publishing as Self-Contained eliminated the CoreCLR dependency on the server and saved me a lot of time. If you’re deploying .NET Core apps to environments where you don’t control the runtime installation, Self-Contained is the way to go!

Hope you found this useful!

Centralizing Storage for Multiple Websites by Mounting a Shared Windows Network Drive on CentOS Using CIFS

Running more than 45 websites on a CentOS server can be a challenge, especially when it comes to managing storage. In my case, these websites are hosted across two different servers, and each site has its own storage folder for images and uploads.

To streamline this, I came up with a solution: centralizing all storage uploads in one place by hosting a shared Windows network drive and mounting it on the CentOS servers.

This way, all websites can access the same centralized storage, making management much easier.

Here’s how I did it, step by step:

Step 1: Install cifs-utils

First, you need to install the cifs-utils package, which provides the necessary tools to mount CIFS shares.

yum install cifs-utils

Step 2: Create a Service Account

Next, I created a service account on the CentOS server. This account will be used to access the shared drive. I named it svc_library_core and assigned it a UID of 5000. You can choose any name and UID that isn’t already in use.

useradd -u 5000 svc_library_core

Step 3: Create a Group for the Share

I also created a group on the CentOS server that will map to the shared drive. This group will contain all the Linux accounts that need access to the share. I named the group share_library_core and gave it a GID of 6000.

groupadd -g 6000 share_library_core

Step 4: Add Users to the Group

After creating the group, I added the necessary users to it. For example, if you want the apache user to have access to the share, you can add it like this:

usermod -G share_library_core -a apache

Step 5: Prepare Credentials

To mount the drive, you’ll need to provide the credentials for the Windows share. You can either pass them directly using the -o option or save them in a file. I opted to save them in a file for security reasons.

Switch to the root user and create a file to store the credentials:

su - root
touch /root/creds_smb_library_core
chmod 0600 /root/creds_smb_library_core
vi /root/creds_smb_library_core

Add the following lines to the file, replacing the placeholders with your actual Windows or Active Directory username and password:

username=YourWindowsUsername
password=YourWindowsPassword
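The credentials setup above can also be scripted. Here's a small sketch; it writes to a throwaway mktemp path so it's safe to run anywhere, but on the server the path would be /root/creds_smb_library_core:

```shell
#!/bin/sh
# Throwaway location for illustration; on the real server this would be
# /root/creds_smb_library_core, owned by root.
CREDS_FILE="$(mktemp -d)/creds_smb_library_core"

# Create the file and lock it down to the owner *before* writing the password.
touch "$CREDS_FILE"
chmod 0600 "$CREDS_FILE"

# Placeholder credentials, in the same format as the file above.
printf 'username=%s\npassword=%s\n' 'YourWindowsUsername' 'YourWindowsPassword' > "$CREDS_FILE"

# Confirm only the owner can read it.
stat -c '%a' "$CREDS_FILE"   # prints: 600
```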

Step 6: Mount the Drive

Now, you’re ready to mount the shared drive. Use the following command, replacing the placeholders with your actual values:

sudo mount -t cifs //<destination_ip>/path/to/directory /test-directory -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775
  • //<destination_ip>/path/to/directory is the path to the shared drive on the Windows server.
  • /test-directory is the local directory where you want to mount the drive.
  • credentials=/root/creds_smb_library_core points to the file containing your credentials.
  • uid=5000 and gid=6000 map the mounted drive to the service account and group you created earlier.
  • iocharset=utf8 ensures proper character encoding.
  • file_mode=0774 and dir_mode=0775 set the file and directory permissions.

Step 7: Unmounting the Drive

If you need to unmount the drive later, you can do so with the following command:

sudo umount /test-directory

Bonus: Automount Script for Server Reboots

One challenge I faced was ensuring that the shared drives were automatically remounted in case the servers went down or were rebooted. To handle this, I wrote a simple script that mounts the drives on startup.

Create the Script

Create a new script file, for example, /usr/local/bin/mount_shares.sh:

#!/bin/bash
mount -t cifs //<destination_ip>/path/to/directory /test-directory -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775

Make sure to replace the placeholders with your actual values.
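One refinement I'd suggest (my own habit, not something the setup above requires): guarding the mount with a check against /proc/mounts makes the script safe to re-run, whether at boot or from cron. A sketch with the same placeholder paths; the actual mount command is left commented so the demo is safe to run anywhere:

```shell
#!/bin/sh
# Returns success if the given path is currently a mount point,
# according to /proc/mounts (Linux-specific).
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

if is_mounted /test-directory; then
    echo "/test-directory already mounted; nothing to do"
else
    echo "/test-directory not mounted; mounting now"
    # Uncomment on the server:
    # mount -t cifs //<destination_ip>/path/to/directory /test-directory \
    #     -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775
fi
```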

Make the Script Executable

Set the executable permission on the script:

chmod +x /usr/local/bin/mount_shares.sh

Add the Script to Startup

To ensure the script runs on startup, add it to the /etc/rc.local file:

/usr/local/bin/mount_shares.sh

Make sure /etc/rc.local is executable:

chmod +x /etc/rc.local

Test the Script

Reboot your server and verify that the shared drives are automatically mounted.

And that’s it!

You’ve successfully mounted a shared Windows network drive on CentOS using CIFS and ensured it remounts automatically on server reboots.

This setup has made managing storage for multiple websites much more efficient and centralized.

Hope you found this article useful!

Adding Timezones to DateTime Strings in C#

Whether you're dealing with UTC, local times, or specific timezone offsets, there's a simple way to format your DateTime objects properly.

Here’s the rundown:

1. Using DateTimeOffset for Timezone-Aware DateTimes

DateTimeOffset is my go-to for working with date and time when I need to include the timezone. It represents the date and time together with the offset from UTC.

Example: Adding UTC or Specific Timezones

DateTime utcDateTime = DateTime.UtcNow;
DateTimeOffset utcWithOffset = new DateTimeOffset(utcDateTime, TimeSpan.Zero); // UTC timezone

// For a specific timezone, like Eastern Standard Time (UTC-05:00)
TimeZoneInfo easternZone = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
DateTimeOffset easternTime = TimeZoneInfo.ConvertTime(utcWithOffset, easternZone);

Console.WriteLine(utcWithOffset.ToString("o")); // 2025-01-23T12:00:00.0000000+00:00
Console.WriteLine(easternTime.ToString("o"));  // 2025-01-23T07:00:00.0000000-05:00

The "o" format is particularly handy because it outputs the ISO 8601 standard, which is great for APIs or data exchange.

2. Appending Timezones to Strings Manually

Sometimes, all you need is a DateTime string with a simple timezone indicator (like Z or +00:00 for UTC). Here's how I did that:

DateTime dateTime = DateTime.UtcNow; // the Z suffix only makes sense for a UTC value
string formattedDate = dateTime.ToString("yyyy-MM-ddTHH:mm:ss.fff") + "Z"; // Z = UTC
Console.WriteLine(formattedDate); // Outputs: 2025-01-23T12:00:00.000Z

For a local time, include the actual offset instead:

string formattedDateWithOffset = DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss.fffzzz");
Console.WriteLine(formattedDateWithOffset); // Outputs: 2025-01-23T07:00:00.000-05:00

3. Custom Format Specifiers for Timezones

Need a custom date string with the timezone? The "zzz" specifier has you covered:

DateTime now = DateTime.Now;
string dateTimeWithTimeZone = now.ToString("yyyy-MM-ddTHH:mm:ss.fff zzz");
Console.WriteLine(dateTimeWithTimeZone); // Outputs: 2025-01-23T07:00:00.000 -05:00

Why should you do this?

  • It’s straightforward to ensure your timestamps are timezone-aware.
  • You can easily adjust for specific timezones or stick with UTC for consistency.
  • The formatting options ("o", "zzz", etc.) give you flexibility to match the needs of APIs, logs, or user-facing systems.

If you've ever struggled with timezone handling in C#, give these approaches a shot. Timestamps are tricky, but with these tools in your belt, you'll nail it every time!

Hope you found this article useful!

A Faster Way to Apply Linux Permissions for Large Laravel Projects Using xargs and find

Today, I learned a more efficient way to apply Linux permissions to large projects by leveraging the power of xargs and find. This method is particularly useful when dealing with Laravel projects that have many files and directories, as it allows for parallel processing and excludes specific folders (like a storage directory) from the permission changes.

Here’s the step-by-step process I followed:

1. Set Ownership of the Entire Project Folder

To ensure the correct ownership for the entire project, I used the chown command recursively:

sudo chown -R [username]:[groupname] [foldername]

This command sets the owner and group for all files and directories within the specified folder.

2. Set Permissions for the Project Folder and Its Top-Level Directories

Next, I applied the desired permissions to the project folder and its top-level directories using chmod:

sudo chmod 2775 [foldername]

The 2775 permission sets the folder to allow read, write, and execute for the owner and group, and read and execute for others. The 2 enables the setgid bit, which ensures that new files and directories inherit the group of the parent folder.
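The setgid behavior is easy to verify in a scratch directory. A quick throwaway demo, not part of the deployment itself:

```shell
#!/bin/sh
# Scratch directory standing in for the project folder.
DEMO="$(mktemp -d)/project"
mkdir "$DEMO"
chmod 2775 "$DEMO"

# %a prints the full octal mode, including the setgid bit.
stat -c '%a' "$DEMO"          # prints: 2775

# A directory created inside a setgid directory inherits the parent's group.
mkdir "$DEMO/subdir"
stat -c '%G' "$DEMO" "$DEMO/subdir"
```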

3. Find and Change Permissions for All Directories (Except the Storage Folder) in Parallel

To apply permissions to all directories while excluding the storage folder, I combined find, xargs, and chmod:

find [foldername] -path [foldername]/storage -prune -o -type d -print0 | sudo xargs -0 -P 2 chmod 2775
  • find [foldername] -path [foldername]/storage -prune -o -type d: Finds all directories except the storage folder.
  • -print0: Outputs the results in a null-terminated format to handle spaces in filenames.
  • xargs -0 -P 2: Processes the results in parallel (2 processes at a time) for faster execution.
  • chmod 2775: Applies the desired permissions to the directories.

4. Find and Change Permissions for All Files (Except the Storage Folder) in Parallel

Similarly, I applied permissions to all files while excluding the storage folder:

find [foldername] -path [foldername]/storage -prune -o -type f -print0 | sudo xargs -0 -P 2 chmod 0664
  • -type f: Finds all files.
  • chmod 0664: Sets read and write permissions for the owner and group, and read-only for others.
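You can convince yourself that the -prune really skips the storage folder with a scratch tree standing in for the Laravel project (a throwaway demo; the real commands above are unchanged):

```shell
#!/bin/sh
# Tiny stand-in for a project tree, including a storage folder to exclude.
PROJ="$(mktemp -d)/app"
mkdir -p "$PROJ/public" "$PROJ/storage/logs"
touch "$PROJ/index.php" "$PROJ/storage/logs/laravel.log"

# Same pattern as above: prune storage, fix everything else in parallel.
find "$PROJ" -path "$PROJ/storage" -prune -o -type d -print0 | xargs -0 -P 2 chmod 2775
find "$PROJ" -path "$PROJ/storage" -prune -o -type f -print0 | xargs -0 -P 2 chmod 0664

stat -c '%a' "$PROJ/public"        # prints: 2775
stat -c '%a' "$PROJ/index.php"     # prints: 664
stat -c '%a' "$PROJ/storage/logs"  # left at its original mode
```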

5. Handle the Storage Folder Separately (If It’s Too Large)

For the storage folder, I applied permissions separately to avoid overloading the system:

find [foldername]/storage -type d -print0 | sudo xargs -0 -P 2 chmod 2775
find [foldername]/storage -type f -print0 | sudo xargs -0 -P 2 chmod 0664

This ensures that the storage folder and its contents have the correct permissions without affecting the rest of the project.

Why This Approach?

Using xargs with -P allows for parallel processing, which speeds up permission changes significantly for large projects. Additionally, excluding specific folders (like storage) ensures that sensitive or frequently accessed directories are handled separately.

This method has saved me a lot of time and effort when managing permissions for large projects.

Hope you found this article useful!

Using records for DTOs

If you're building an application or API using .NET Core where you need to integrate with external services or third-party APIs, this is for you.

What are records?

Records are a feature introduced in C# 9.0 that lets you create simple, immutable data types. They come in handy for representing DTOs (Data Transfer Objects) because the syntax is concise, and DTOs exist primarily to transfer data between layers of an application, such as the Core and Presentation layers.

Here's an example of a car using a regular class type:

public class Car
{
    public string Manufacturer { get; init; }
    public double Price { get; init; }
    public string Color { get; init; }
}

And here's a simplified version of that using a record type:

public record Car(
    string Manufacturer,
    double Price,
    string Color
);

Using the record type for Data Transfer Objects promotes clean coding practices, letting you write concise yet maintainable code.

So, no need for classes?

Well, it depends: records aren't meant to replace classes in every scenario. Regular classes are still recommended for types with complex behavior, whereas records work well for simple data structures.

Hope you found this article useful!

Using Property Pattern Matching in C#

Alright, we've all been there, especially when it comes to writing if conditions with multiple checks in a single line, like the example below:

if(employee != null && employee.Age > 20 && employee.Address.City == "Dubai")
{
    // Do something here...
}

As you can see, this condition is straightforward, but it can hinder readability, and it's easy to forget the explicit null check.

Here's where you can make use of Property Pattern Matching.

What's Property Pattern Matching?

A property pattern matches when the expression being tested is non-null and its properties (or nested properties) match the corresponding sub-patterns, so the null check comes for free. Here's what the earlier condition looks like with a property pattern:

if(employee is { Age: > 20, Address.City: "Dubai" }) // Address.City needs C# 10; on C# 8/9, write Address: { City: "Dubai" }
{
    // Do something here...
}

This pattern elevates your code's readability, folds the null check into the match itself, and keeps the condition type-safe.

Hope you found this tip useful!

Remove Control Flags

There are times when we write flags to control the flow of a program. Control flags have their place (sometimes), but in most cases they aren't necessary, like the one shown below:

public bool HandlePaymentRecords(List<PaymentRecord> paymentRecords)
{
    bool isSuccessful = true;

    foreach(var paymentRecord in paymentRecords)
    {
        if(!ProcessPaymentRecord(paymentRecord))
        {
            isSuccessful = false;
            break;
        }
    }

    return isSuccessful;
}

The method's name states its purpose, but the control flag makes the body a tad more complicated than it needs to be and might confuse developers who come on board later.

Instead, you can make use of a refactoring technique called Remove Control Flags and modify the method to look like this:

public bool HandlePaymentRecords(List<PaymentRecord> paymentRecords)
{
    foreach(var paymentRecord in paymentRecords)
    {
        if(!ProcessPaymentRecord(paymentRecord))
        {
            return false;
        }
    }

    return true;
}

The logic is the same, but the code is now simpler, easier to read, and much more maintainable.

Less code is good code.

Hope this helps you out!