Does an OS thread get blocked on I/O performed by a goroutine?


On my machine there are 4 logical processors, so there are four contexts P1, P2, P3 & P4 working with OS threads M1, M2, M3 & M4:

$ lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  2
Core(s) per socket:  2
Socket(s):           1

In the code below:

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func getPage(url string) (int, error) {
    resp, err := http.Get(url)
    if err != nil {
        return 0, err
    }
    defer resp.Body.Close()

    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return 0, err
    }
    return len(body), nil
}

func worker(urlChan chan string, sizeChan chan<- string, i int) {
    for {
        url := <-urlChan
        length, err := getPage(url)
        if err == nil {
            sizeChan <- fmt.Sprintf("%s has length %d (%d)", url, length, i)
        } else {
            sizeChan <- fmt.Sprintf("%s has error %s (%d)", url, err, i)
        }
    }
}

func main() {

    urls := []string{"", "",
        "", "", "", ""}

    urlChan := make(chan string)
    sizeChan := make(chan string)

    for i := 0; i < len(urls); i++ {
        go worker(urlChan, sizeChan, i)
    }

    for _, url := range urls {
        urlChan <- url
    }

    for i := 0; i < len(urls); i++ {
        fmt.Printf("%s\n", <-sizeChan)
    }
}

there are six goroutines that perform http.Get().


1) Does the OS thread (M1), running on context P1, get blocked along with goroutine G1 during the I/O in http.Get()?


2) Does the Go scheduler preempt goroutine G1 from OS thread M1 when it blocks in http.Get(), and assign G2 to M1? If yes, once G1 has been descheduled, how does the Go runtime arrange for G1 to resume when the I/O in http.Get() completes?


3) What is the API to retrieve the context number (P) used by each goroutine (G), for debugging purposes?

4) With the C pthreads library we protect the critical section in the reader/writer problem above using a counted semaphore. Why do we not need explicit critical sections when using goroutines and channels?


No, it doesn’t block. My rough (and unsourced; I picked it up through osmosis) understanding is that whenever a goroutine wants to perform a “blocking” I/O operation that has a non-blocking equivalent, it:

  1. Performs the non-blocking version instead,
  2. Records its own ID in a table, keyed by the handle it is “blocking” on,
  3. Transfers responsibility for completion to a dedicated thread that sits in a select loop (or poll, or whatever equivalent is available) waiting for such operations to unblock, and
  4. Suspends itself, freeing up its OS thread (M) to run another goroutine.

When the I/O operation unblocks, the select-loop looks in the table to figure out which goroutine was interested in the result, and schedules it to be run. In this way, goroutines waiting for I/O do not occupy an OS thread.

In the case of I/O that can’t be done in a non-blocking way, or any other blocking syscall, the goroutine executes the syscall through a runtime function that marks its thread as blocked, and the runtime will create a new OS thread for goroutines to be scheduled on. This maintains the ability to have GOMAXPROCS running (not blocked) goroutines. This doesn’t cause very much thread bloat for most programs, since the most common syscalls for dealing with files, sockets, etc. have been made async-friendly. (Thanks to @JimB for reminding me of this, and the authors of the helpful linked answers.)

Answered By – hobbs

Answer Checked By – David Goodson (GoLangFix Volunteer)
