Tuesday, December 13, 2022

[Python] Push file to Azure Blob from Local HD or from Public URL

Sample Python code; Python 3.7 or higher is required.



#!/usr/bin/python
# -*- coding: utf-8 -*-
# pip3 install pymysql azure-storage-blob
import time

import pymysql.cursors
from azure.storage.blob import BlobServiceClient


def Upload2AzureFromURL(NewFileName, FileURL, BibID):
    try:
        connectionString = "DefaultEndpointsProtocol=https;AccountName=**********;AccountKey=*****;EndpointSuffix=core.windows.net"
        containerName = "newcontainer"
        blob_service_client = BlobServiceClient.from_connection_string(connectionString)
        blob_client = blob_service_client.get_blob_client(container=containerName, blob=NewFileName)
        # start a server-side copy and tag the new blob with its BibID
        blob_client.start_copy_from_url(FileURL, metadata={"BibID": str(BibID)})
        status = None
        for _ in range(10):
            props = blob_client.get_blob_properties()
            status = props.copy.status
            print("Copy status: " + status)
            if status == "success":
                # copy finished
                print("success...")
                break
            time.sleep(10)

        if status != "success":
            # if not finished after ~100 s, cancel the copy operation
            props = blob_client.get_blob_properties()
            print(props.copy.status)
            blob_client.abort_copy(props.copy.id)
            props = blob_client.get_blob_properties()
            print(props.copy.status)



    except Exception as ex:
        print('Exception:')
        print(ex)

def Upload2AzureFromLocalHD(NewFileName, FullFilePath, BibID):
    try:
        connectionString = "DefaultEndpointsProtocol=https;AccountName=********;AccountKey=****;EndpointSuffix=core.windows.net"
        containerName = "newcontainer"
        blob_service_client = BlobServiceClient.from_connection_string(connectionString)
        blob_client = blob_service_client.get_blob_client(container=containerName, blob=NewFileName)
        # pass metadata to upload_blob; set_blob_metadata would fail on a blob that does not exist yet
        with open(file=FullFilePath, mode="rb") as data:
            blob_client.upload_blob(data, metadata={"BibID": str(BibID)})



    except Exception as ex:
        print('Exception:')
        print(ex)






mrqoom_db = pymysql.connect(host="**********",    # your host, usually localhost
                     user="***********",         # your username
                     passwd="***********",   # your password
                     db="***",  
                     charset='utf8')


mrqoom_db.autocommit(True)
mrqoom_cur = mrqoom_db.cursor(pymysql.cursors.DictCursor)
mrqoom_cur.execute("SET session group_concat_max_len=30000;")

mrqoom_cur.execute("""
                    SELECT id,BibID,url FROM `files` 
                   """) 

try:
    MarcRecords=mrqoom_cur.fetchall()
    for row in MarcRecords:
        try:
            print('Try Download FT for ID: '+str(row["id"])+'     '+str(row["url"]))
            NewFileName=str(row["url"]).rsplit('/', 1)[-1]
            Upload2AzureFromURL(NewFileName,str(row["url"]),str(row["BibID"]))
        except Exception as e:
            print ("Fail!")
            print (str(e))
except Exception as e:
    print ("Fail!")
    print (str(e))

mrqoom_cur.close()
mrqoom_db.close()

print ("============= END ====================")

Monday, November 14, 2022

gRPC

About gRPC 

gRPC is an open source remote procedure call (RPC) system initially developed at Google. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, blocking or nonblocking bindings, and cancellation and timeouts.

gRPC may look similar to WebSockets, but the underlying difference is that it works over the HTTP/2 protocol, and the request/response data format is bound to protobuf; it cannot use JSON or XML. Protobuf, however, is more compact and lightweight than either. The connection is persistent, and the client can invoke methods on the remote server through it as needed. gRPC offers four types of method calls: the traditional request/response model, server-side streaming, client-side streaming, and bi-directional streaming.

What are protocol buffers?
Protocol buffers are a mechanism for serializing structured data; think XML, but smaller, faster, and simpler. You define how you want your data to be structured once, then you can use special generated source code to easily write and read your structured data to and from a variety of data streams, using a variety of languages.


Note: we can use gRPC from different languages, like C#, Dart, Java, Node, PHP, Python, ...


Http2 vs Http1.1



HTTP/1.1 supports only the request/response pattern, does not compress headers, and creates a new TCP connection per request. So, if we visit a page that contains one image and one CSS file, this means creating 3 TCP connections!


With HTTP/2, one TCP connection is used for multiple requests/responses. It supports server push, compresses both headers and data into binary frames (less bandwidth), and can send multiple messages at the same time. In practice, TLS is required by default.


How to enable HTTP/2 on IIS?
IIS running on Windows 10 or Windows Server 2016 supports HTTP/2 by default, but the connection must be HTTPS.
You shouldn't need to change anything in your application for HTTP/2 to work.

Here is how to install IIS and enable local SSL for testing...







How to validate that the current connection is using HTTP/2?

Launch your browser on your Windows 10 or Windows Server 2016 machine and hit F12 (or go to Settings and enable the F12 Developer Tools), then switch to the Network tab. Browse to https://localhost and voila, you are on HTTP/2!

if "Protocol" is not exists, then right click > Header Options > Protocol




Types of gRPC APIs



1. Unary
A classic request/response API; this is what most REST-style APIs look like. The client sends a single request and the server sends a single response to that request.

2. Server Streaming
The client sends a single request to the server, and the server keeps sending data back as a stream.

3. Client Streaming
The opposite of server streaming: the client sends a stream of requests and expects a single response. The server may send that response after all the requests have arrived or somewhere in the middle; it depends on the implementation.

4. Bi-Directional Streaming
A combination of server streaming and client streaming: both the client and the server send a stream of messages. The client initiates the connection and starts streaming request messages, and the server streams its responses back. The sketch below shows how all four shapes look from the client side.
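A minimal client-side sketch in Python (the demo_pb2 / demo_pb2_grpc modules are hypothetical, assumed to be generated from a demo.proto by grpcio-tools; the service and method names are illustrative, not from any real API):

import grpc

# hypothetical generated modules: python -m grpc_tools.protoc ... demo.proto
import demo_pb2
import demo_pb2_grpc

channel = grpc.insecure_channel("localhost:50051")
stub = demo_pb2_grpc.DemoStub(channel)

# 1. Unary: one request in, one response out
reply = stub.Unary(demo_pb2.Request(text="hello"))

# 2. Server streaming: one request in, an iterator of responses out
for reply in stub.ServerStream(demo_pb2.Request(text="hello")):
    print(reply.text)

# 3. Client streaming: an iterator of requests in, one response out
def request_stream():
    for word in ("a", "b", "c"):
        yield demo_pb2.Request(text=word)

reply = stub.ClientStream(request_stream())

# 4. Bi-directional streaming: request and response are both streams
for reply in stub.BidiStream(request_stream()):
    print(reply.text)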







gRPC Scalability
Server: async
Client: sync / async


gRPC Performance

Let us compare data streaming via gRPC vs MQTT vs WebSockets: which is better?


The results above clearly show that gRPC wins because of its persistent connection and the protobuf data format, which is lightweight.

Conclusion
Out of the three options, it depends on individual requirements to choose one. If it is collecting data from sensors and IoT devices, the choice would always be MQTT. But if data streaming is between devices that don't have resource constraints, gRPC and WebSockets are both options. For my requirement, gRPC is the winner.













Sunday, October 30, 2022

Create a temp URL valid for one minute only for file on Azure Blob Storage

 

How to create a time-limited (dynamic) URL in Azure using C#?


        CloudStorageAccount account = CloudStorageAccount.Parse("yourStringConnection");
        CloudBlobClient serviceClient = account.CreateCloudBlobClient();

        var container = serviceClient.GetContainerReference("yourContainerName");
        container
            .CreateIfNotExistsAsync()
            .Wait();

        CloudBlockBlob blob = container.GetBlockBlobReference("test/helloworld.txt");
        //blob.UploadTextAsync("Hello, World!").Wait();

        SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy();

        // define the expiration time
        policy.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1);

        // define the permission
        policy.Permissions = SharedAccessBlobPermissions.Read;

        // create signature
        string signature = blob.GetSharedAccessSignature(policy);

        // get full temporary uri
        Console.WriteLine(blob.Uri + signature);
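The snippet above uses the classic WindowsAzure.Storage SDK. For comparison, here is the same one-minute read-only URL generated with the newer azure-storage-blob Python package (a minimal sketch; the account name, key, and blob path are placeholders):

from datetime import datetime, timedelta

from azure.storage.blob import BlobSasPermissions, generate_blob_sas

account_name = "yourAccount"
account_key = "yourAccountKey"
container = "yourContainerName"
blob_name = "test/helloworld.txt"

# read-only SAS token that expires in one minute
sas = generate_blob_sas(
    account_name=account_name,
    container_name=container,
    blob_name=blob_name,
    account_key=account_key,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(minutes=1),
)

print(f"https://{account_name}.blob.core.windows.net/{container}/{blob_name}?{sas}")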



Starting with EF Core 5.0 (released with .NET 5), Entity Framework Core provides a method to retrieve the SQL statement from a LINQ query without executing it: the ToQueryString() method of IQueryable.

Sunday, October 23, 2022

DB2 Replication Options

 Replication Options

1) Db2 High Availability Disaster Recovery (HADR):
Active/passive replication; supports up to three remote standby servers.
When the active database goes down, a standby database can take over in seconds with HADR. The original primary database can then be brought back up and returned to its primary status, which is known as failback. A failback can be initiated when the old primary database is consistent with the new primary database. After reintegrating the old primary database into the HADR setup as a standby database, the database roles are switched to make the original primary database the primary again.





2) Db2 pureScale: designed for continuous availability. All software components are installed and configured from a single host. pureScale scales your database solution using multiple database servers, known as members, which process incoming database requests; these members operate in a clustered system and share data. You can transparently add more members to scale out to meet even the most demanding business needs. There are no application changes to make, no data to redistribute, and no performance tuning to do.



3) IBM InfoSphere Data Replication (IIDR):

IIDR has three alternative components:

  1. Change Data Capture (CDC): for heterogeneous databases, i.e., replication between Oracle and DB2.
  2. SQL Replication: the older approach, used in broadcast topologies; it creates staging tables in the source DB, which increases DB size because all DB changes are captured there.
  3. Q Replication: uses IBM MQ to capture all DB changes inside MQ; high volume, low latency.






    Q Replication: the best solution in IIDR

    Q Replication is a high-volume, low-latency replication solution that uses WebSphere MQ message queues to transmit transactions between source and target databases



    Q Replication High availability scenarios

    1. Two nodes for failover: update workloads execute on a primary node; the second node is not available for any workload.
    2. Two nodes with one read-only node for query offloading: update workloads execute on a primary node; read-only workloads are allowed on a second node.
    3. Two nodes, Active/Active, with strict conflict rules: update workloads execute on two different nodes; conflicts are managed. Deploy only when conflicts can be carefully managed.
    4. Three nodes with at least one read-only node: update workloads execute on a primary node; read-only workloads execute on the second and third nodes; conflicts are tightly managed.
    5. Three nodes, Active/Active, with strict conflict rules: update workloads execute on three different nodes; conflicts are managed using data partitioning and workload distribution. Use when you have unstable/slow connection topologies.


    Q Replication components

    1) The Q Capture and Q Apply programs and their associated DB2 control tables (listed as Capture, Apply, and Contr in the diagram)

    2) The Administration tools that include the Replication Center (db2rc) and the ASNCLP command-line interface

    3) The Data Replication Dashboard and the ASNMON utility that deliver a live monitoring web tool and an alert monitor respectively

    4) Additional utilities like the ASNTDIFF table-compare program and the asnqmfmt program to browse Q Replication messages from a WebSphere MQ queue


    Notes:

    - The Q Capture program is log-based.
    - The Q Apply program applies multiple transactions in parallel to the target DB2.
    - The Q Capture program reads the DB2 recovery log for changes to a source table defined to replication. The program then sends transactions as WebSphere MQ messages over queues, where they are read and applied to target tables by the Q Apply program.
    - Asynchronous delivery: the Q Apply program receives transactions without having to connect to the source database or subsystem. The Q Capture and Q Apply programs operate independently of each other; neither one requires the other to be operating.



    InfoSphere Information Server

    InfoSphere Information Server is an IBM data integration platform that provides a comprehensive set of tools and capabilities for managing and integrating data across various sources and systems. It is designed to help organizations address data quality, data integration, data transformation, and data governance challenges.

    InfoSphere Information Server enables businesses to access, transform, and deliver trusted and timely data for a wide range of data integration use cases, such as data warehousing, data migration, data synchronization, and data consolidation. It offers a unified and scalable platform that supports both batch processing and real-time data integration.


    Key components of InfoSphere Information Server include:

    1) DataStage: A powerful ETL (Extract, Transform, Load) tool that allows users to design, develop, and execute data integration jobs. It provides a graphical interface for building data integration workflows and supports a wide range of data sources and targets.

    2) QualityStage: A data quality tool that helps identify and resolve data quality issues by profiling, cleansing, standardizing, and matching data. It incorporates various data quality techniques and algorithms to improve the accuracy and consistency of data.

    3) Information Governance Catalog: A metadata management tool that enables users to capture, store, and manage metadata about data assets, including data sources, data definitions, data lineage, and data ownership. It helps organizations establish data governance practices and provides a centralized repository for managing and searching metadata.

    4) Data Click: A self-service data preparation tool that allows business users to discover, explore, and transform data without the need for extensive technical skills. It provides an intuitive and user-friendly interface for data profiling, data cleansing, and data enrichment.

    5) Information Analyzer: A data profiling and analysis tool that helps assess the quality, structure, and content of data. It allows users to discover data anomalies, identify data relationships, and generate data quality reports.

    InfoSphere Information Server provides a comprehensive and integrated platform for managing the entire data integration lifecycle, from data discovery and profiling to data quality management and data delivery. It helps organizations improve data consistency, data accuracy, and data governance, leading to better decision-making and increased operational efficiency.








    for more information visit
    https://www.youtube.com/watch?v=U_PN8QLTec8



    Tuesday, October 11, 2022

    Big O notation

     Big O notation is used to classify algorithms according to how their run time or memory space requirements grow as the input size grows.




    From the chart, O(1) has the least complexity, and O(n!) is the most complex.


    Time Complexity

    An algorithm is said to run in:

    1) Constant time (also written as O(1) time) if the running time is bounded by a value that does not depend on the size of the input. For example, accessing any single element in an array takes constant time, as only one operation has to be performed to locate it. In a similar manner, finding the minimal value in an array sorted in ascending order is constant time: it is the first element. However, finding the minimal value in an unordered array is not a constant time operation, as scanning over each element in the array is needed in order to determine the minimal value. Hence it is a linear time operation, taking O(n) time.

    2) Logarithmic time (O(log n)), commonly found in binary trees or binary search. An example of logarithmic time is dictionary search: consider a dictionary D which contains n entries, sorted by alphabetical order.

    3) Linear algorithm – O(n) – Linear Search.

    4) Superlinear algorithm – O(n log n) – Heap Sort, Merge Sort.

    5) Polynomial algorithm – O(n^c) – Selection Sort, Insertion Sort, Bucket Sort. (See the search example below for the practical gap between O(n) and O(log n).)
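    To make the gap concrete, here is linear search next to binary search in Python (a minimal sketch, not from the original post):

    def linear_search(items, target):          # O(n): may scan every element
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1

    def binary_search(sorted_items, target):   # O(log n): halves the range each step
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    data = list(range(1_000_000))
    print(linear_search(data, 999_999))   # about a million comparisons
    print(binary_search(data, 999_999))   # about 20 comparisons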



    Space complexity measures the amount of memory an algorithm uses:

    1) Ideal algorithm - O(1) - Linear Search, Binary Search, Selection Sort, Insertion Sort, Heap Sort.

    2) Logarithmic algorithm - O(log n) - Top down merge sort for linked list .

    3) Linear algorithm - O(n) - Quick Sort, Merge Sort with recursive merge.

    4) Sub-linear algorithm - O(n+k) - Radix Sort.


    Merge sort can consume O(log n), O(n), or O(1) stack space!
    A top-down merge sort for a linked list will consume O(log n) stack space,
    and it is slower than a bottom-up approach due to the scanning of lists to split them. Merge sort can take O(n) stack space due to the recursive merge().
    A bottom-up merge sort for a linked list uses a small (25 to 32 entry) fixed-size array of references (or pointers) to nodes, which meets the O(1) space requirement.

    Link to wiki article:
    https://en.wikipedia.org/wiki/Merge_sort#Bottom-up_implementation_using_lists


    Monday, September 12, 2022

    Read IP CAM Video Stream using Python

    Most IP cameras, like Dahua, support the RTSP protocol. Connect your IP cam to the network and get the camera's IP address.

    Prerequisites:
    1- The camera feed must be H.264; it can't be H.265
    2- The camera feed bit rate should be 4096 or lower

    SAMPLE IP CAM Admin Panel



    The expected RTSP URL will look like this:

    Dahua Main Stream:
    rtsp://admin:password@192.168.1.102:554/cam/realmonitor?channel=1&subtype=0

    Dahua Sub Stream:
    rtsp://admin:password@192.168.1.102:554/cam/realmonitor?channel=1&subtype=1


    Then you can access it using VLC, or with a Python script like the one below.
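    A minimal OpenCV sketch (installation notes follow below; the URL and credentials are the sample values above):

    import cv2

    # Dahua sub stream from the sample above (credentials are placeholders)
    url = "rtsp://admin:password@192.168.1.102:554/cam/realmonitor?channel=1&subtype=1"

    cap = cv2.VideoCapture(url)
    if not cap.isOpened():
        raise RuntimeError("Cannot open RTSP stream; check URL, codec, and bit rate")

    while True:
        ok, frame = cap.read()                  # grab one decoded frame
        if not ok:
            break
        cv2.imshow("IP CAM", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
            break

    cap.release()
    cv2.destroyAllWindows()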


    How to install OpenCV?

    There are four OpenCV packages that are pip-installable from the PyPI repository:

    opencv-python: This repository contains just the main modules of the OpenCV library, without the contrib modules.

    opencv-contrib-python: The opencv-contrib-python repository contains both the main modules along with the contrib modules. This is the library we recommend you install, as it includes all OpenCV functionality.

    opencv-python-headless: Same as opencv-python but no GUI functionality. Useful for headless systems.

    opencv-contrib-python-headless: Same as opencv-contrib-python but no GUI functionality. Useful for headless systems.

    You DO NOT want to install both opencv-python and opencv-contrib-python. Pick ONE of them.

    pip install opencv-contrib-python will install a wheel such as:
    opencv_contrib_python-4.6.0.66-cp36-abi3-win_amd64.whl

    Thursday, September 1, 2022

    Prevent two connections from reading same row in DB2

    How can each user pick a row such that every user gets a unique row from a DB2 database?

    Solution Steps

    1. Connection one queries the database table for a row. Reads first row and locks it while reading for update.
    2. Connection two queries the database table for a row. Should not be able to read the first row, should read second row if available and lock it for update.
    3. Similar logic for connection 3, 4, etc..

    If we have 1000 users and each user should select a different row from Table1, we can achieve that by adding a new column LOCKED to Table1, then selecting an unlocked row, locking it, and returning the row ID in one step:


    SELECT ID FROM FINAL TABLE
    (
        UPDATE Table1
        SET "LOCKED" = 1
        WHERE ID IN (SELECT ID FROM Table1 WHERE "LOCKED" = 0 FETCH FIRST 1 ROW ONLY)
    )
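    From application code, each worker can run that statement and read back the ID it claimed. A minimal Python sketch using the ibm_db driver (the connection string is a placeholder; the post itself shows no client code):

    import ibm_db  # pip install ibm_db

    conn = ibm_db.connect(
        "DATABASE=testdb;HOSTNAME=localhost;PORT=50000;UID=db2inst1;PWD=secret;", "", "")

    CLAIM_ROW = """
    SELECT ID FROM FINAL TABLE
    (
        UPDATE Table1
        SET "LOCKED" = 1
        WHERE ID IN (SELECT ID FROM Table1 WHERE "LOCKED" = 0 FETCH FIRST 1 ROW ONLY)
    )
    """

    stmt = ibm_db.exec_immediate(conn, CLAIM_ROW)
    row = ibm_db.fetch_assoc(stmt)   # False when no unlocked row is left
    if row:
        print("Claimed row:", row["ID"])
    ibm_db.close(conn)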


    How to get default and generated values after inserting a new record?

    CREATE TABLE EMPSAMP
      (EMPNO     INTEGER GENERATED ALWAYS AS IDENTITY,
       NAME      CHAR(30),
       SALARY    DECIMAL(10,2),
       DEPTNO    SMALLINT,
       LEVEL     CHAR(30),
       HIRETYPE  VARCHAR(30) NOT NULL WITH DEFAULT 'New Hire',
       HIREDATE  DATE NOT NULL WITH DEFAULT);


    Retrieving generated column values


    SELECT EMPNO, HIRETYPE, HIREDATE  FROM FINAL TABLE
    (INSERT INTO EMPSAMP (NAME, SALARY, DEPTNO, LEVEL)                   
    VALUES ('Mary Smith', 35000.00, 11, 'Associate'));


    Another Example

    select data while update data

    SELECT SUM(SALARY) INTO :salary FROM FINAL TABLE
    (
          UPDATE EMP SET SALARY = SALARY * 1.05    WHERE JOB = 'DESIGNER'
    );




    DB2 isolation levels (since DB2 9.7)

    What if, inside a transaction, you update the DB and then run SQL that needs to select data while those updates are still uncommitted?

    DB2 has 4 isolation levels:

    1. Read committed [read stability (RS)]
    2. Read uncommitted (UR)
    3. Cursor stability (CS): with CS, the transaction never reads data that is not yet committed; only committed data can be read.
    4. Repeatable read (RR)

    You can also use the WITH parameter on a SELECT statement to set the isolation level of a single SQL statement.


    Select count(*) from table1 with RS   --this will count only the committed rows

    Select count(*) from table1 with UR   --this will count all rows, committed and uncommitted


    Cursor stability (CS)

    With CS, the transaction never reads data that is not yet committed; only committed data can be read. Cursor stability is the default isolation level if none is specified at BIND time.

    Repeatable read (RR)

    With RR page locking, consider a reporting program that scans a table to produce a detail report, and then scans it again to produce a summarized managerial report. If the program is bound using CS, the results of the first report might not match the results of the second.

    If the program used an RR isolation level rather than CS, an UPDATE that occurs after the production of the first report but before the second would not be allowed. The program would maintain the locks it held from the generation of the first report, and the updater would be locked out until the locks were released.


    Read committed [read stability (RS)]

    Read stability is similar in functionality to the RR isolation level, but a little less restrictive. A retrieved row or page is locked until the end of the unit of work; no other program can modify the data until the unit of work is complete, but other processes can insert values that might be read by your application if it accesses the row a second time.


    Read uncommitted (UR)

    The UR isolation level provides read-through locks, also known as dirty read or read uncommitted. Using UR can help to overcome concurrency problems. When using an uncommitted read, an application program can read data that has been changed but is not yet committed. UR can be a performance booster, too, because application programs bound using the UR isolation level read data without taking locks.





    for more information

    https://www.ibm.com/docs/en/db2/11.5?topic=issues-isolation-levels



    Sunday, August 21, 2022

    use React in MVC project

     install NuGet packages

    React.AspNet
    JavaScriptEngineSwitcher.V8
    JavaScriptEngineSwitcher.Extensions.MsDependencyInjection

    JavaScriptEngineSwitcher.V8.Native.win-x64 [old]
    or
    Microsoft.ClearScript.V8.Native.win-x64 [new]


    Monday, August 8, 2022

    Core Concept and Design Patterns

    OOP core Concepts:

    -Inheritance
    -Encapsulation
    -Polymorphism
    -Data Abstraction


    Abstract class: can't be used directly; a child class should inherit from the abstract class and override its abstract methods.

    Encapsulation means all the necessary data and methods are bound together and all the unnecessary details are hidden from the normal user. So encapsulation is the process of binding the data members and methods of a program together to do a specific job, without revealing unnecessary details, as the sketch below illustrates.
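    For instance, in Python the same idea could look like this (an illustrative sketch, not from the post):

    class BankAccount:
        def __init__(self, balance=0):
            self.__balance = balance       # hidden detail (name-mangled)

        def deposit(self, amount):         # the only way to change the balance
            if amount <= 0:
                raise ValueError("amount must be positive")
            self.__balance += amount

        @property
        def balance(self):                 # read-only view for the outside world
            return self.__balance

    acct = BankAccount(100)
    acct.deposit(50)
    print(acct.balance)                    # 150
    # acct.__balance would raise AttributeError: the detail stays hidden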


    Polymorphism (overload, override)
    refers to the process by which some code, data, method, or object behaves differently under different circumstances or contexts.

    For example, think of a base class called Animal that has a method called animalSound(). Derived classes of Animal could be Pig, Cat, Dog, or Bird, and each has its own implementation of the animal sound, as sketched below.
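    A minimal Python version of that Animal example (illustrative only; the method name mirrors the text):

    class Animal:
        def animalSound(self):
            return "..."

    class Dog(Animal):
        def animalSound(self):             # override: same call, different behavior
            return "Woof"

    class Cat(Animal):
        def animalSound(self):
            return "Meow"

    for pet in (Dog(), Cat()):
        print(pet.animalSound())           # Woof, then Meow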




    How to prevent overriding? The final keyword (in Java, for example) means the variable can't be reassigned or the method can't be overridden.

    Tuesday, July 19, 2022

    IIB

    IIB connects endpoints and makes data integration simple.
    It transforms and routes data from anywhere to anywhere using graphical mapping, Java, ESQL, and XSL.
    Publish/Subscribe with IBM MQ and MQTT









    IIB main components 


    Integration Node: the runtime engine for IIB (it runs as a Windows service). We can have many integration nodes, each running on a separate system to provide protection against failure.

    Each integration node contains one or more processes that are called integration servers. An integration server can contain one or more message flows. Some message flows need message models or schemas. The integration node uses the message models and schemas to parse and optionally, validate the message content and construct predefined messages.

    Application: a container for message flows and resources. Applications run on an integration server, which provides isolation and scalability.

    Message flows integrate with external systems such as web services and DBs.

    Integration node can be managed by 
    - Web user interface
    - IIB Commands
    - Administrative Console 
    - Integration Toolkit

    IIB Versions: Express, Scale, Standard and Advanced



    Integration Toolkit





    Integration Nodes

    Create, Delete Integration nodes
    Connect to remote Integration nodes
    Deploy Message Flows to an integration server
    Start/Stop Integration node, Integration Server, and Message flows








    Sunday, May 22, 2022

    C# System.Collections.Generic

     1) ToLookup()

    1. It creates a key based on the user's choice at runtime. In this article the length of the string is used as the key, so it stores the data and creates the key based on the string's length.
      E.g. "Lajapathy" has a length of 9, so the key is created with the value 9 and the value is "Lajapathy".
    2. Exactly the same concept as Dictionary<K, T>, but the key is not static; it is dynamic.
    3. The key is set at runtime.
    4. It is very useful when using complex data types.
    5. It is useful for getting data fast, because it stores the data indexed by key.
    6. It is a KeyValue<K, T> pair.

    Example

    public static List<string> GetStringList()
        {
            List<string> list = new List<string>();
            list.Add("Lajapathy");
            list.Add("Sathiya");
            list.Add("Parthiban");
            list.Add("AnandBabu");
            list.Add("Sangita");
            list.Add("Lakshmi");
            return list;
        }

    .................................
    .................................

            List<string> list = GetStringList();
     
            //Sets KeyValue pair based on the string length.
        ILookup<int, string> lookup = list.ToLookup(i => i.Length);


            //Iterate only the strings having length 7.
            foreach (string temp in lookup[7])
            {
                HttpContext.Current.Response.Write(temp + "<br/>");
            }

    ======================================================

        public static List<Employee> EmployeeList()
        {
            List<Employee> emp = new List<Employee>();
            emp.Add(new Employee { ID = 100, Name = "Lajapathy", CompanyName = "FE" });
            emp.Add(new Employee { ID = 200, Name = "Parthiban", CompanyName = "FE" });
            emp.Add(new Employee { ID = 400, Name = "Sathiya", CompanyName = "FE" });
            emp.Add(new Employee { ID = 300, Name = "Anand Babu", CompanyName = "FE" });
            emp.Add(new Employee { ID = 300, Name = "Naveen", CompanyName = "HCL" });
            return emp;
        }
    ..............................................
    ..............................................
            List<Employee> empList = EmployeeList();
            //Creating KeyValue pair based on the ID. we can get items based on the ID.
        ILookup<int, Employee> lookList = empList.ToLookup(emp => emp.ID);

            //Displaying who having the ID=100.
            foreach (Employee temp in lookList[100])
            {
                Console.WriteLine(temp.Name);
            } 

    ========================================================

    2) Max, Min methods

    using System.Linq;
    using System.Collections.Generic;
    ................

    List<int> list = new List<int>() { 5, -1, 4, 9, -7, 8 };

    int maxValue = list.Max();
    int maxIndex = list.IndexOf(maxValue);
     
    int minValue = list.Min();
    int minIndex = list.IndexOf(minValue);
     
    Console.WriteLine("Maximum element {0} present at index {1}", maxValue, maxIndex);
    Console.WriteLine("Minimum element {0} present at index {1}", minValue, minIndex);



    3) Where , FirstOrDefault


        List<string> myList = new List<string>();
        myList.Add("Lajapathy");
        myList.Add("Sathiya");
        myList.Add("Parthiban");
        myList.Add("AnandBabu");
        myList.Add("Sangita");
        myList.Add("Lakshmi");


      //return the first item which matches your criteria, or null
      string result = myList.FirstOrDefault(s => s == "Lakshmi");

      //return all items which match your criteria
      string search = "Sathiya";
      IEnumerable<string> results = myList.Where(s => s == search);



    4) LIKE operator in LINQ


    Typically you use String.StartsWith/EndsWith/Contains. For example:


    public class Student
    {
        public int Id;
        public string Name;
    }

    var students= new List<Student>() { 
                    new Student(){ Id = 1, Name="Bill"},
                    new Student(){ Id = 2, Name="Steve"},
                    new Student(){ Id = 3, Name="Ram"},
                    new Student(){ Id = 4, Name="Abdul"}
                };

    var id = students
        .Where(p => p.Name.Contains("u"))
        .FirstOrDefault()
        .Id;


    5) AddRange to Append to List


    var favouriteCities = new List<string>();
    var popularCities = new List<string>();

    string[] cities = new string[3]{ "Mumbai", "London", "New York" };

    popularCities.AddRange(cities);
    favouriteCities.AddRange(popularCities);


    6) Remove vs RemoveAt


    var numbers = new List<int>(){ 10, 20, 30, 40, 10 };
    
    numbers.Remove(10); // removes the first 10 from a list
    
    numbers.RemoveAt(2); //removes the 3rd element (index starts from 0)



    7) Contains() to Check Elements in List


    var numbers = new List<int>(){ 10, 20, 30, 40 };
    numbers.Contains(10); // returns true
    numbers.Contains(11); // returns false



    8) Sort vs Reverse()


    var words = new List<string> {"falcon", "order", "war", "sky", "ocean", "blue", "cloud", "boy"};

    words.Sort();
    Console.WriteLine(string.Join(",", words));

    words.Reverse();     // descending order
    Console.WriteLine(string.Join(",", words));



    9) Linq OrderBy()


    class Pet
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    public static void OrderByEx1()
    {
        Pet[] pets = { new Pet { Name = "Barley", Age = 8 },
                       new Pet { Name = "Boots", Age = 4 },
                       new Pet { Name = "Whiskers", Age = 1 } };

        IEnumerable<Pet> query = pets.OrderBy(pet => pet.Age);

        foreach (Pet pet in query)
        {
            Console.WriteLine("{0} - {1}", pet.Name, pet.Age);
        }
    }

    /* This code produces the following output:
       Whiskers - 1
       Boots - 4
       Barley - 8
    */


    10)  Linq: from where select



    // Specify the data source.
    int[] scores = { 97, 92, 81, 60 };

    // Define the query expression.
    IEnumerable<int> scoreQuery =
        from score in scores
        where score > 80
        select score;

    // Execute the query.
    foreach (int i in scoreQuery)
    {
        Console.Write(i + " ");
    }

    // Output: 97 92 81



    11) Linq Distinct, OrderBy 


    string s = "efgabcddddddaaaaaaaaaaa";
    List<char> myList = s.Distinct().OrderBy(q => q).ToList();
    Console.Write(string.Join(">",myList));    //a>b>c>d>e>f>g