megacolorboy

Abdush Shakoor's Weblog

Writings, experiments & ideas.

How to Cherry-Pick Commits from One Git Branch to Another?

Although I had heard of Git's cherry-pick command, I never used it until a few months ago when I encountered a situation that demanded it. A client required us to work on multiple incidents and change requests (CRs), but they only wanted specific, approved updates pushed to the repository.

This created a challenge, especially when I was simultaneously working on CRs and incidents. I needed a way to isolate and push only the approved changes while keeping the rest intact. That’s when I decided to try out Git’s cherry-pick command—and it worked like a charm! It made managing my codebase much easier.

Here’s a simple walkthrough of how you can use it too:

The Scenario

I had two branches in my Git repository:

  • Branch A: The primary branch where ongoing development happens.
  • Branch B: A feature branch where I accidentally pushed updates that should partially go into Branch A.

While it’s straightforward to merge changes from one branch into another, I only needed a subset of the commits from Branch B to be applied to Branch A. This is where Git’s cherry-pick command shines.

Follow along step by step:

1. Switch to the Target Branch

First, check out the branch where you want to apply the commits (in my case, Branch A):

git checkout A

2. Find the Commits to Cherry-Pick

Use the git log command to list the commits in Branch B and identify the specific commit hashes you want to cherry-pick:

git log B

You should see output similar to this:

commit abc1234 (HEAD -> B)
Author: Your Name <you@example.com>
Date:   Mon Jan 1 12:00:00 2025 +0000

    Add feature X

commit def5678
Author: Your Name <you@example.com>
Date:   Sun Dec 31 12:00:00 2024 +0000

    Fix bug Y

Take note of the commit hash(es) you need. For example, let’s say you want to cherry-pick abc1234 and def5678.
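
If you prefer a more compact view of just the hashes and commit subjects, the --oneline flag helps:

git log --oneline B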

3. Cherry-Pick the Commit(s)

Single Commit

To apply a single commit from Branch B onto Branch A, use:

git cherry-pick abc1234

Multiple Commits

To apply multiple non-contiguous commits, list their hashes:

git cherry-pick abc1234 def5678

Range of Commits

To cherry-pick a range of contiguous commits, use the ^.. range notation:

git cherry-pick def5678^..abc1234

This includes all commits from def5678 to abc1234, inclusive.

4. Resolve Any Conflicts

If there are conflicts during the cherry-picking process, Git will pause and notify you. Resolve the conflicts in your files, then mark them as resolved:

git add <file>

Continue the cherry-pick process:

git cherry-pick --continue

To abort the cherry-pick if things go wrong:

git cherry-pick --abort

5. Verify the Result

Once the cherry-picking is complete, you can inspect your branch to ensure the changes were applied:

git log

You should see the cherry-picked commits in Branch A.

When is cherry-picking useful?

Cherry-picking is perfect when you need specific commits from another branch without merging all its changes. This is especially helpful in scenarios like:

  • Applying a bug fix from a feature branch to the main branch.
  • Pulling specific updates without merging unrelated work.

Bonus Tips

  • Always double-check commit hashes before cherry-picking to avoid unexpected results.
  • Use descriptive commit messages to make it easier to identify what each commit does.
  • If you find yourself cherry-picking often, consider rethinking your branch workflows to reduce the need for it.

That’s it! Now you know how to cherry-pick commits in Git. It’s a small but incredibly powerful tool to keep in your Git arsenal.

How to Check if SMTP is Reachable from My Machine Using Telnet?

Recently, I learned how to test SMTP server connectivity using telnet. This is a handy way to verify if an SMTP server is reachable, troubleshoot email delivery issues, and manually interact with the server to diagnose problems.

Here’s how I did it:

1. Ensure Telnet is Installed

First, I made sure telnet was installed on my machine. Many Linux distributions ship with telnet available, but if it’s missing, here’s how to install it:

For Ubuntu/Debian:

sudo apt install telnet

For CentOS/RHEL:

sudo yum install telnet

For Windows:

Ironically, telnet is not enabled by default on Windows, but you can enable it:

  1. Open the Control Panel.
  2. Go to Programs > Programs and Features > Turn Windows features on or off.
  3. Check the box for Telnet Client and click OK.
  4. Once enabled, open the Command Prompt (cmd) to use Telnet.
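
On recent Windows versions you can also skip Telnet entirely and test reachability from PowerShell with Test-NetConnection (available since Windows 8.1 / Server 2012 R2):

Test-NetConnection smtp.example.com -Port 25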

2. Open a Telnet Connection to the SMTP Server

To connect to the SMTP server, I used the following command:

telnet <smtp_server_address> 25

Replace <smtp_server_address> with the SMTP server’s hostname or IP address. For example:

telnet smtp.example.com 25

If the connection is successful, the server responds with a 220 message, like this:

220 smtp.example.com ESMTP Postfix

This means the SMTP server is reachable and ready to accept commands.

3. Test SMTP Commands (Optional)

To further test the SMTP server, I manually interacted with it using basic SMTP commands:

Greet the Server:

I sent an EHLO command to introduce myself:

EHLO example.com

The server responded with a list of supported features.

Start an Email Transaction:

I specified the sender using the MAIL FROM command:

MAIL FROM:<test@example.com>

The server responded with 250 OK.

Specify the Recipient:

I added the recipient using the RCPT TO command:

RCPT TO:<user@example.com>

The server responded with 250 OK.

Enter the Email Data:

I typed DATA to start composing the email:

DATA

After entering the email content, I ended the message with a single . on a new line:

Subject: Test Email
This is a test email sent via Telnet.
.

The server responded with 250 OK, confirming that the email was accepted.

Quit the Session:

Finally, I ended the session with the QUIT command:

QUIT

4. Troubleshoot Connection Issues

If the connection fails, I might see an error like:

telnet: Unable to connect to remote host: Connection refused

This could mean:

  • The SMTP server is not running.
  • The port is blocked by a firewall.
  • The server uses a different port (e.g., 587 for submission or 465 for SMTPS).

To resolve this, I checked:

  • If the SMTP server is running and listening on the correct port.
  • If my firewall or network allows outbound connections to the SMTP port.
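
If the default port is blocked, pointing Telnet (or netcat, if it’s installed) at the alternative ports is a quick way to narrow things down:

telnet smtp.example.com 587
nc -zv smtp.example.com 465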

Example Workflow:

Here’s what a successful Telnet session looks like:

$ telnet smtp.example.com 25
Trying 192.0.2.1...
Connected to smtp.example.com.
Escape character is '^]'.
220 smtp.example.com ESMTP Postfix
EHLO example.com
250-smtp.example.com
250-PIPELINING
250-SIZE 10240000
250-VRFY
250-ETRN
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-8BITMIME
250 DSN
MAIL FROM:<test@example.com>
250 2.1.0 Ok
RCPT TO:<user@example.com>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: Test Email
This is a test email sent via Telnet.
.
250 2.0.0 Ok: queued as 12345
QUIT
221 2.0.0 Bye
Connection closed by foreign host.

Why is this useful?

Using Telnet to test SMTP connectivity is a quick and effective way to:

  • Verify if the SMTP server is reachable.
  • Diagnose email delivery issues.
  • Manually test SMTP commands and server responses.

Hope you've found this article useful!

How to Change the Modification Date of a Folder Using PowerShell?

Recently, I had an interesting experience with a client who can be a bit nosy—and occasionally bossy too. They have a strict policy of deploying updates during off-peak hours on Fridays.

Like most developers, I’m not a fan of deploying on Fridays 😡.

To navigate this, I deploy the updates on the server earlier in the week (without their knowledge) and place them in a separate directory until we receive the green light to switch the website to the new directory via server settings.

If you’re wondering why I use a separate directory, it’s to ensure I have a quick rollback option if needed. Oh, and did I mention they’ve blocked git on the production server? So, no version control system (VCS) for us.

Anyway, I once deployed earlier than planned and realized they might check the deployment timestamps. To stay ahead, I wrote a quick PowerShell script to tweak the folder timestamps.

Steps to Change Folder Modification Date

Open PowerShell as Administrator

  • Search for "PowerShell" in the Start menu.
  • Right-click on it and select Run as Administrator.

Run the Command

Use the following command to set the modification date for a folder:

$path = "C:\Path\To\Your\Folder"
$date = Get-Date "2025-01-01 10:00:00"
(Get-Item $path).LastWriteTime = $date
  • Replace C:\Path\To\Your\Folder with the full path of the folder.
  • Replace 2025-01-01 10:00:00 with the desired date and time in the format yyyy-MM-dd HH:mm:ss.

Verify the Change

To confirm the modification date has been updated, use:

(Get-Item $path).LastWriteTime

Other Timestamps You Can Modify

PowerShell allows you to adjust other timestamps as well:

Creation Date:

(Get-Item $path).CreationTime = $date

Accessed Date:

(Get-Item $path).LastAccessTime = $date
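
If you also need to stamp everything inside the folder rather than just the folder itself, here’s a minimal sketch using the same $path and $date variables:

# Recursively apply the same modification date to every file and subfolder
Get-ChildItem -Path $path -Recurse | ForEach-Object { $_.LastWriteTime = $date }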

Why is this useful?

This method is great for:

  • Organizing folders with custom timestamps.
  • Simulating specific scenarios for testing.
  • Avoiding the hassle of installing third-party tools.

Situations like this have made me realize that the flexibility of PowerShell makes it the perfect tool for quick tasks. It’s amazing how a simple command can give you full control over folder metadata!

Hope you found this useful (and keep your tracks clear!)

Fixing a CoreCLR Load Error in IIS with Self-Contained Deployment

Today I learned how to resolve a tricky issue I encountered while hosting an ASP.NET Core application in IIS. The error was:

Application '/LM/W3SVC/2/ROOT' with physical root 'C:\inetpub\wwwroot\xxx_dev4' failed to load coreclr.

Diagnosing the Issue

This error pointed to a failure in loading the .NET Core Common Language Runtime (CoreCLR). It’s a common problem when the server is missing the correct version of the .NET runtime required by the application. After some investigation, I found these potential causes:

  • The .NET Core Hosting Bundle wasn’t installed on the server.
  • The required .NET runtime version wasn’t present.
  • The application pool in IIS was misconfigured.
  • There were potential permission issues for the application folder.

The Self-Contained Solution

To avoid dependency on the server’s runtime installation, I decided to try publishing the application as Self-Contained from Visual Studio 2022. This method bundles the .NET runtime and all dependencies with the application, ensuring it runs regardless of the server’s configuration.

Steps to Publish as Self-Contained

Here’s how I published my app as Self-Contained in Visual Studio 2022:

  1. Right-click on the project in Solution Explorer and choose Publish.
  2. In the Pick a publish target dialog, I selected Folder.
  3. Clicked Next and Finish to confirm the target.
  4. Opened the Settings in the Publish tab.
    • Changed Deployment Mode to Self-Contained.
    • Selected the appropriate Target Runtime for my server (e.g., win-x64).
  5. Saved the settings and clicked Publish.

The published output included all the necessary files, including the .NET runtime, making the app ready to deploy to IIS without relying on the server’s runtime installation.
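
If you’d rather script the publish instead of clicking through Visual Studio, the dotnet CLI can produce the same self-contained output; here’s a sketch assuming the win-x64 runtime identifier mentioned above:

dotnet publish -c Release -r win-x64 --self-contained true -o ./publish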

Results

After deploying the self-contained package to IIS, the application ran successfully! The CoreCLR error was resolved because the app now used the bundled runtime instead of depending on the server’s configuration.

Key Takeaways

  • Self-Contained Deployment is a lifesaver when hosting environments lack the required .NET runtime.
  • It’s a great way to ensure version isolation, as the app always runs with the specific runtime version it was built with.
  • The tradeoff is a larger deployment package and manual updates for runtime patches.

Conclusion

Publishing as Self-Contained eliminated the CoreCLR dependency on the server and saved me a lot of time. If you’re deploying .NET Core apps to environments where you don’t control the runtime installation, Self-Contained is the way to go!

Hope you found this useful!

Centralizing Storage for Multiple Websites by Mounting a Shared Windows Network Drive on CentOS Using CIFS

Running more than 45 websites on a CentOS server can be a challenge, especially when it comes to managing storage. In my case, these websites are hosted across two different servers, and each site has its own storage folder for images and uploads.

To streamline this, I came up with a solution: centralizing all storage uploads in one place by hosting a shared Windows network drive and mounting it on the CentOS servers.

This way, all websites can access the same centralized storage, making management much easier.

Here’s how I did it, step by step:

Step 1: Install cifs-utils

First, you need to install the cifs-utils package, which provides the necessary tools to mount CIFS shares.

yum install cifs-utils

Step 2: Create a Service Account

Next, I created a service account on the CentOS server. This account will be used to access the shared drive. I named it svc_library_core and assigned it a UID of 5000. You can choose any name and UID that isn’t already in use.

useradd -u 5000 svc_library_core

Step 3: Create a Group for the Share

I also created a group on the CentOS server that will map to the shared drive. This group will contain all the Linux accounts that need access to the share. I named the group share_library_core and gave it a GID of 6000.

groupadd -g 6000 share_library_core

Step 4: Add Users to the Group

After creating the group, I added the necessary users to it. For example, if you want the apache user to have access to the share, you can add it like this:

usermod -G share_library_core -a apache

Step 5: Prepare Credentials

To mount the drive, you’ll need to provide the credentials for the Windows share. You can either pass them directly using the -o option or save them in a file. I opted to save them in a file for security reasons.

Switch to the root user and create a file to store the credentials:

su - root
touch /root/creds_smb_library_core
chmod 0600 /root/creds_smb_library_core
vi /root/creds_smb_library_core

Add the following lines to the file, replacing the placeholders with your actual Windows or Active Directory username and password:

username=YourWindowsUsername
password=YourWindowsPassword

Step 6: Mount the Drive

Now, you’re ready to mount the shared drive. Use the following command, replacing the placeholders with your actual values:

sudo mount -t cifs //<destination_ip>/path/to/directory /test-directory -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775
  • //<destination_ip>/path/to/directory is the path to the shared drive on the Windows server.
  • /test-directory is the local directory where you want to mount the drive.
  • credentials=/root/creds_smb_library_core points to the file containing your credentials.
  • uid=5000 and gid=6000 map the mounted drive to the service account and group you created earlier.
  • iocharset=utf8 ensures proper character encoding.
  • file_mode=0774 and dir_mode=0775 set the file and directory permissions.
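
Once mounted, you can quickly confirm the share is visible (using the example mount point from above):

df -hT /test-directory
mount | grep cifs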

Step 7: Unmounting the Drive

If you need to unmount the drive later, you can do so with the following command:

sudo umount /test-directory

Bonus: Automount Script for Server Reboots

One challenge I faced was ensuring that the shared drives were automatically remounted in case the servers went down or were rebooted. To handle this, I wrote a simple script that mounts the drives on startup.

Create the Script

Create a new script file, for example, /usr/local/bin/mount_shares.sh:

#!/bin/bash
mount -t cifs //<destination_ip>/path/to/directory /test-directory -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775

Make sure to replace the placeholders with your actual values.
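
If there’s a chance the script runs more than once, you can guard against double-mounting with mountpoint; a small variation of the same script:

#!/bin/bash
# Only attempt the mount if the directory isn't already a mount point
mountpoint -q /test-directory || mount -t cifs //<destination_ip>/path/to/directory /test-directory -o credentials=/root/creds_smb_library_core,uid=5000,gid=6000,iocharset=utf8,file_mode=0774,dir_mode=0775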

Make the Script Executable

Set the executable permission on the script:

chmod +x /usr/local/bin/mount_shares.sh

Add the Script to Startup

To ensure the script runs on startup, add it to the /etc/rc.local file:

/usr/local/bin/mount_shares.sh

Make sure /etc/rc.local is executable:

chmod +x /etc/rc.local

Test the Script

Reboot your server and verify that the shared drives are automatically mounted.

And that’s it!

You’ve successfully mounted a shared Windows network drive on CentOS using CIFS and ensured it remounts automatically on server reboots.

This setup has made managing storage for multiple websites much more efficient and centralized.

Hope you found this article useful!

Adding Timezones to DateTime Strings in C#

Whether you're dealing with UTC, local times, or specific timezone offsets, there's a simple way to format your DateTime objects properly.

Here’s the rundown:

1. Using DateTimeOffset for Timezone-Aware DateTimes

DateTimeOffset is my go-to for working with date and time when I need to include the timezone. It represents the date, the time, and the offset from UTC.

Example: Adding UTC or Specific Timezones

DateTime utcDateTime = DateTime.UtcNow;
DateTimeOffset utcWithOffset = new DateTimeOffset(utcDateTime, TimeSpan.Zero); // UTC timezone

// For a specific timezone, like Eastern Standard Time (UTC-05:00)
TimeZoneInfo easternZone = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
DateTimeOffset easternTime = TimeZoneInfo.ConvertTime(utcWithOffset, easternZone);

Console.WriteLine(utcWithOffset.ToString("o")); // 2025-01-23T12:00:00.0000000+00:00
Console.WriteLine(easternTime.ToString("o"));   // 2025-01-23T07:00:00.0000000-05:00

The "o" format is particularly handy because it outputs the ISO 8601 standard, which is great for APIs or data exchange.

2. Appending Timezones to Strings Manually

Sometimes, all you need is a DateTime string with a simple timezone indicator (like Z for UTC). Here's how I did that:

DateTime utcNow = DateTime.UtcNow; // Z means UTC, so start from a UTC value
string formattedDate = utcNow.ToString("yyyy-MM-ddTHH:mm:ss.fff") + "Z";
Console.WriteLine(formattedDate); // Outputs: 2025-01-23T12:00:00.000Z

For more precision, you can include the actual offset of the local time:

string formattedDateWithOffset = DateTime.Now.ToString("yyyy-MM-ddTHH:mm:ss.fffzzz");
Console.WriteLine(formattedDateWithOffset); // Outputs: 2025-01-23T07:00:00.000-05:00

3. Custom Format Specifiers for Timezones

Need a custom date string with the timezone? The "zzz" specifier has you covered:

DateTime now = DateTime.Now;
string dateTimeWithTimeZone = now.ToString("yyyy-MM-ddTHH:mm:ss.fff zzz");
Console.WriteLine(dateTimeWithTimeZone); // Outputs: 2025-01-23T07:00:00.000 -05:00

Why should you do this?

  • It’s straightforward to ensure your timestamps are timezone-aware.
  • You can easily adjust for specific timezones or stick with UTC for consistency.
  • The formatting options ("o", "zzz", etc.) give you flexibility to match the needs of APIs, logs, or user-facing systems.

If you've ever struggled with timezone handling in C#, give these approaches a shot. Timestamps are tricky, but with these tools in your belt, you'll nail it every time!

Hope you found this article useful!

A Faster Way to Apply Linux Permissions for Large Laravel Projects Using xargs and find

Today, I learned a more efficient way to apply Linux permissions to large projects by leveraging the power of xargs and find. This method is particularly useful when dealing with Laravel projects that have many files and directories, as it allows for parallel processing and excludes specific folders (like a storage directory) from the permission changes.

Here’s the step-by-step process I followed:

1. Set Ownership of the Entire Project Folder

To ensure the correct ownership for the entire project, I used the chown command recursively:

sudo chown -R [username]:[groupname] [foldername]

This command sets the owner and group for all files and directories within the specified folder.

2. Set Permissions for the Project Folder and Its Top-Level Directories

Next, I applied the desired permissions to the project folder and its top-level directories using chmod:

sudo chmod 2775 [foldername]

The 2775 permission sets the folder to allow read, write, and execute for the owner and group, and read and execute for others. The 2 enables the setgid bit, which ensures that new files and directories inherit the group of the parent folder.

3. Find and Change Permissions for All Directories (Except the Storage Folder) in Parallel

To apply permissions to all directories while excluding the storage folder, I combined find, xargs, and chmod:

find [foldername] -path [foldername]/storage -prune -o -type d -print0 | sudo xargs -0 -P 2 chmod 2775
  • find [foldername] -path [foldername]/storage -prune -o -type d: Finds all directories except the storage folder.
  • -print0: Outputs the results in a null-terminated format to handle spaces in filenames.
  • xargs -0 -P 2: Processes the results in parallel (2 processes at a time) for faster execution.
  • chmod 2775: Applies the desired permissions to the directories.

4. Find and Change Permissions for All Files (Except the Storage Folder) in Parallel

Similarly, I applied permissions to all files while excluding the storage folder:

find [foldername] -path [foldername]/storage -prune -o -type f -print0 | sudo xargs -0 -P 2 chmod 0664
  • -type f: Finds all files.
  • chmod 0664: Sets read and write permissions for the owner and group, and read-only for others.

5. Handle the Storage Folder Separately (If It’s Too Large)

For the storage folder, I applied permissions separately to avoid overloading the system:

find [foldername]/storage -type d -print0 | sudo xargs -0 -P 2 chmod 2775
find [foldername]/storage -type f -print0 | sudo xargs -0 -P 2 chmod 0664

This ensures that the storage folder and its contents have the correct permissions without affecting the rest of the project.
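
To spot-check the result afterwards, GNU stat can print the octal mode and ownership in one line per path (same placeholders as above):

stat -c "%a %U:%G %n" [foldername] [foldername]/storage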

Why This Approach?

Using xargs with -P allows for parallel processing, which speeds up permission changes significantly for large projects. Additionally, excluding specific folders (like storage) ensures that sensitive or frequently accessed directories are handled separately.

This method has saved me a lot of time and effort when managing permissions for large projects.

Hope you found this article useful!

Using records for DTOs

If you're building an application or API with .NET Core that needs to integrate with external services or third-party APIs, this is for you.

What are records?

Records are a feature introduced in C# 9.0 that let you create simple, immutable data types. They come in handy for representing DTOs (Data Transfer Objects) because the syntax is concise, and DTOs are primarily used to transfer data between layers of an application, such as the Core and Presentation layers.

Here's an example of a car using a regular class type:

public class Car
{
    public string Manufacturer { get; init; }
    public double Price { get; init; }
    public string Color { get; init; }
}

And here's a simplified version of that using record type:

public record Car(
    string Manufacturer,
    double Price,
    string Color
);

Using the record type for Data Transfer Objects encourages clean coding practices: the code stays concise yet maintainable.
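
Records also give you value-based equality, a compiler-generated ToString, and non-destructive mutation through with expressions. Here's a small sketch using the Car record defined above:

var car = new Car("Toyota", 89500.00, "Red");
var repainted = car with { Color = "Blue" }; // copy the record, changing only Color

Console.WriteLine(repainted);        // Car { Manufacturer = Toyota, Price = 89500, Color = Blue }
Console.WriteLine(car == repainted); // False: records compare by value, and Color differs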

So, no need of using classes?

Well, it depends; records aren't meant to replace classes in every scenario. Regular classes are still recommended for types with complex behavior, whereas records work well for simple data structures.

Hope you found this article useful!